iOS 15.4 impact on Microsoft Teams Mobile (Apple devices)

Microsoft and Apple have a way of complementing each other with their security updates, and this latest update from Apple has caused a small wrinkle for Microsoft Teams and Outlook on iOS devices. Until now, whenever a mobile user (in our case an iOS user) tapped the dial-in number in a Teams meeting, they were redirected to the dial pad with the number ready to dial so they could join the meeting.

But here is where Apple has decided to put an end to this easy dialing option (citing potential security concerns). After upgrading an Apple device to iOS 15.4, users can no longer use this feature. Microsoft has already published an article on the issue, and Apple has confirmed that the behavior is by design.

When will this be resolved? Apple may or may not change the behavior; we will have to wait for an official confirmation from Apple, and no ETA has been mentioned.

In conclusion, this is behavior by design and there is no real workaround, unless you are willing to long-press the link, copy the number along with the conference code, paste it into the dial pad, and join the meeting that way.

Android users can still enjoy the feature.

Cheers,

Ganesh G

Safely remove Public folders from Org – Exchange 2013

I am assuming that anyone reading this post already has a good understanding of how public folders are structured and how they work in Exchange 2013. If you are new to the topic, I suggest reading up on Exchange 2013 public folders before going further so that this does not go over your head.

Scope out the organization and gather the following information (a sketch for collecting most of this is shown after the list):

  • Number of public folders – GetPublicFolderStatistics.ps1 script
  • Number of public folder mailboxes
  • Number of root public folders
  • The content mailbox for each public folder
  • The public folder permissions, exported to a CSV that can be used to reapply the permissions if we ever need to restore the public folders
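A minimal sketch for gathering the counts and content mailbox information above, using standard Exchange 2013 EMS cmdlets (the output path is just an example):

(Get-PublicFolder -Recurse -ResultSize Unlimited).Count       # total number of public folders
(Get-Mailbox -PublicFolder).Count                             # number of public folder mailboxes
(Get-PublicFolder "\" -GetChildren).Count                     # number of root public folders
Get-PublicFolder -Recurse -ResultSize Unlimited | Select-Object Identity,ContentMailboxName | Export-Csv C:\Temp\PF_ContentMailbox.csv -NoTypeInformation   # content mailbox for each folder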

Here we export the client permissions for a given root folder and all of its subfolders:

$PFroot = Read-Host "Enter the public folder root"
Write-Host "You entered $PFroot"
Get-PublicFolder "\$PFroot" -Recurse -ResultSize Unlimited | Get-PublicFolderClientPermission | Select-Object Identity,@{Label="User";Expression={$_.User}},@{Label="AccessRights";Expression={$_.AccessRights}} | Export-Csv "C:\Temp\PublicFolderClientPermission_$PFroot.csv" -NoTypeInformation

Once the client permissions are exported, we can remove them from the public folders with the script below:

$removepfroot = Read-Host "Enter the root public folder where the permissions have to be removed"
$AllPublicFolders = Get-PublicFolder "\$removepfroot" -Recurse -ResultSize Unlimited

foreach ($Pf in $AllPublicFolders)
{
    Get-PublicFolderClientPermission $Pf | ForEach-Object { Remove-PublicFolderClientPermission $_.Identity -User $_.User -Confirm:$false }
}

Then we can remove the public folders via the EAC or the EMS. TechNet documents a simple command for this; a sketch is shown below.
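A minimal sketch of the removal, with the root folder name as a placeholder (the -Recurse switch deletes the folder and everything beneath it, so make sure the permission export from the earlier step is safe first):

Remove-PublicFolder -Identity "\<PFRoot>" -Recurse -Confirm:$false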

What is the back-out plan?

I will detail the restore procedures in my next post. Just a heads-up on what will be covered:

  1. Export the content mailbox information for each public folder
  2. Ways to restore public folders along with their subfolders
  3. How to restore permissions on the restored folders

To make this easier, one must know about the primary and secondary hierarchy:

Primary Hierarchy – the public folder mailbox that hosts the writable copy of the public folder hierarchy. The first public folder mailbox created in an Exchange organization is the primary hierarchy mailbox.

Secondary Hierarchy – all other public folder mailboxes in the Exchange organization, which store a read-only copy of the public folder hierarchy.
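To quickly see which mailbox holds the primary hierarchy, a small check (this relies on the IsRootPublicFolderMailbox property that Exchange 2013 exposes on public folder mailboxes):

Get-Mailbox -PublicFolder | Format-Table Name,IsRootPublicFolderMailbox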

Happy Learning !

Cheers,

GaGa

 

Mailbox is currently unavailable in EMC and EMS – Exchange 2010

 


Scenario:

  • Unable to connect to a mailbox which wasn’t disconnected or deleted.
  • When trying to access the mailbox (e.g., opening the mailbox calendar folder), the error below occurs:

 Error(s):

        “The set of folders cannot be opened. The attempt to log on to Microsoft Exchange has failed.”

 

  • Exporting the mailbox to a PST also failed.
  • Tried logging in to the user mailbox via OWA and Outlook.
  • Both failed with an error stating it was unable to connect to the mailbox.

 

Solution:

  • Run a MAPI connectivity test against the user's mailbox to check whether MAPI connectivity is healthy:

 

Test-MapiConnectivity "tober" | fl

 

RunspaceId : c55b4756-38d9-4bcd-81dd-d0800ec6ce7d
Server     : W8-EXCH-MBOX-E1
Database   : MBOX1 DB2
Mailbox    : tober
Result     : *FAILURE*
Latency    : 00:00:00
Error      : [Microsoft.Exchange.Data.Storage.StorageTransientException]: Cannot open mailbox /o=xxxxxx/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=xxxxxx/cn=Microsoft System Attendant. Inner error [Microsoft.Mapi.MapiExceptionMailboxQuarantined]: MapiExceptionMailboxQuarantined: Unable to open message store. (hr=0x80004005, ec=2611)
Diagnostic context:
Lid: 55847   EMSMDBPOOL.EcPoolSessionDoRpc called [length=152]
Lid: 43559   EMSMDBPOOL.EcPoolSessionDoRpc returned [ec=0xA33][length=274][latency=0]
Lid: 32881   StoreEc: 0xA33
Lid: 50035
Lid: 64625   StoreEc: 0xA33
Lid: 50128
Lid: 1494    ---- Remote Context Beg ----
Lid: 26426   ROP: ropLogon [254]
Lid: 22787   Error: 0x0
Lid: 13032   StoreEc: 0x8004010F
Lid: 25848
Lid: 7588    StoreEc: 0x8004010F
Lid: 25840
Lid: 6564    StoreEc: 0x8004010F
Lid: 27395   Error: 0x0
Lid: 61867
Lid: 37291   StoreEc: 0xA33
Lid: 53675
Lid: 12716   StoreEc: 0xA33
Lid: 20794
Lid: 28474   StoreEc: 0xA33
Lid: 22330   dwParam: 0x0        Msg: 14.03.0174.001:W8-EXCH-MBOX-E1
Lid: 1750    ---- Remote Context End ----
Lid: 50288
Lid: 23354   StoreEc: 0xA33
Lid: 25913
Lid: 21817   ROP Failure: 0xA33
Lid: 26297
Lid: 16585   StoreEc: 0xA33
Lid: 32441
Lid: 1706    StoreEc: 0xA33
Lid: 24761
Lid: 20665   StoreEc: 0xA33
Lid: 25785
Lid: 29881   StoreEc: 0xA33
Identity   :
IsValid    : True

 

The test output shows that the mailbox is quarantined.

  • Check the registry to confirm that the mailbox appears under the corresponding quarantine subkey:

HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<Server Name>\Private-{db guid}\QuarantinedMailboxes\{mailbox guid}

 

  • Get the mailbox and database GUIDs via PowerShell:

 

Get-Mailbox "MailboxName" | fl *Guid*

 

  • Delete the mailbox GUID subkey
  • Dismount and remount the database
  • Run the MAPI connectivity test again to confirm it now succeeds
  • The user can now log in and send email
  • It is advisable to move the mailbox to a different database on the same or another server to rule out corruption (a consolidated sketch of these steps follows this list)
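A consolidated sketch of the steps above, assuming Exchange 2010 and using the server, database, and mailbox names from this example as placeholders (the registry path must be built from the GUIDs you collect):

Get-Mailbox "tober" | fl ExchangeGuid,Database                 # mailbox GUID and database
Get-MailboxDatabase "MBOX1 DB2" | fl Guid                      # database GUID

Remove-Item "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\W8-EXCH-MBOX-E1\Private-{db guid}\QuarantinedMailboxes\{mailbox guid}"   # delete the quarantine subkey on the mailbox server

Dismount-Database "MBOX1 DB2" -Confirm:$false                  # dismount and remount the database
Mount-Database "MBOX1 DB2"
Test-MapiConnectivity "tober"                                  # should now report Success

New-MoveRequest -Identity "tober" -TargetDatabase "MBOX1 DB3"  # optional move to rule out corruption; target database name is hypothetical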

A possible cause for the mailbox being quarantined is poison messages.

Support Links,

http://technet.microsoft.com/en-us/library/gg490642(v=exchg.80).aspx

http://support.microsoft.com/kb/2603736

Regards,

Ganesh G

Cross Site DAG / DAC Mode – Scenarios


Here is a scenario where we have a cross-site DAG, and the core discussion is how it behaves during a disaster (WAN down, primary site down).
Please go through it and post your feedback and corrections, if any.

If you wish to add more to this, please feel free to do so.

Environment:

Two sites, one DAG, 10 databases with copies in both sites.

Primary site:

  • 5 MBX servers
  • 2 CAS/HUB servers
  • Primary file share witness

DR site:

  • 5 MBX servers
  • 2 CAS/HUB servers
  • Alternate (second) file share witness
Based on the current deployment, where we have two sites with an identical number of cluster nodes on each side, what would happen if the link goes down while the servers are still up?

Scenario 1: You have two sites and the WAN link between the sites goes down.


One DAG with 10 members and 10 databases,

The WAN link between the sites goes down (DAC mode does not come into play here).

  1. When the WAN link goes down, communication between the sites is disrupted.
  2. As a result, the secondary (DR) site loses quorum and cannot continue.
  3. The primary site can still maintain quorum, as it has 6 votes (5 nodes + 1 FSW) under the Node and File Share Majority model.
  4. The databases that were active in the DR site fail over to the primary site based on activation preference, handled by the PAM (Primary Active Manager) running in the primary site.

Note: If AD replication between the sites is healthy, the databases fail over to the primary site automatically; otherwise the databases are dismounted in the DR site and we need to mount them on the primary site manually:

Move-ActiveMailboxDatabase <Database Name> -ActivateOnServer <target server>

  5. Now the DAG is fully operational again.
  6. If the WAN link comes back online, manual intervention is required to restore the original layout, such as moving the active database copies back to the DR site.
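Once the switchover settles, a quick way to verify where the active copies ended up and that replication is healthy (the server name is a placeholder):

Get-MailboxDatabaseCopyStatus -Server <PrimaryMBXServer> | Format-Table Name,Status,CopyQueueLength,ReplayQueueLength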

Scenario 2:

a. Primary site goes down – DagOnly (DAC mode is turned on)

Datacenter Activation Coordination (DAC) mode is intended specifically for multi-site Database Availability Groups with three or more members.

It exists to prevent datacenter DAG split-brain syndrome, with the help of a protocol called the Datacenter Activation Coordination Protocol (DACP).

DAC mode works with a bit in memory that is set to 0 or 1. A value of 0 means the member cannot mount databases; once it talks to the other DAG members over DACP and finds another server whose bit is set to 1, it mounts its databases because it knows it is allowed to.
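A quick sketch for checking and enabling DAC mode on the DAG (DAG1 matches the name used in the commands later in this post):

Get-DatabaseAvailabilityGroup DAG1 | Format-List Name,DatacenterActivationMode
Set-DatabaseAvailabilityGroup DAG1 -DatacenterActivationMode DagOnly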


  1. The primary site is down for some reason and has lost quorum.
  2. Because the DAG is not operational, a datacenter switchover is required.
  3. The steps involved in a datacenter switchover are as follows.
  4. Stop the primary site:

 Stop-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite <Primary Site> -ConfigurationOnly

   5. Stop DAG members,

 Stop-DatabaseAvailabilityGroup -Identity DAG1 -MailboxServer <DAGmembersinPrimarySite> -ConfigurationOnly

 

   6. Restore the DAG in the DR site using the following command:

 Restore-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite <DR Site> -AlternateWitnessServer <HUBServer> -AlternateWitnessDirectory <WitnessDirectory Path>

 The Restore-DatabaseAvailabilityGroup cmdlet performs several operations that affect the structure and membership of the DAG’s cluster. This task will:

  1. Forcibly evict the servers listed in the StoppedServersList from the DAG's cluster, thereby reestablishing quorum for the cluster and enabling the surviving DAG members to start and provide service.
  2. Configure the DAG to use the alternate witness server if there is an even number of surviving DAG members.

 7. Mount the databases in the DR site:

Move-ActiveMailboxDatabase -Server <DAGMemberinPrimarySite> -ActivateOnServer <DAGMemberinDRSite> -SkipActiveCopyChecks -SkipClientExperienceChecks -SkipHealthChecks -SkipLagChecks
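When the primary site is eventually recovered, the evicted members can be brought back into the DAG before the databases are switched back; a minimal sketch (the switchback itself is another Move-ActiveMailboxDatabase per database):

Start-DatabaseAvailabilityGroup -Identity DAG1 -ActiveDirectorySite <Primary Site>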

 

Scenario 3:

 

  b. Primary site goes down – DAC mode is turned OFF

 

When the DAG isn’t in DAC mode, the specific actions to terminate any surviving DAG members in the primary datacenter are as follows:

  1. The DAG members in the primary datacenter must be forcibly evicted from the DAG’s underlying cluster by running the following commands on each member:

net stop clussvc

cluster <DAGName> node <DAGMemberName> /forcecleanup

 

  2. The DAG members in the second datacenter must now be restarted and then used to complete the eviction process from the second datacenter.

Stop the Cluster service on each DAG member in the second datacenter by running the following command on each member:

net stop clussvc

 

  3. On a DAG member in the second datacenter, force a quorum start of the Cluster service by running the following command:

 

net start clussvc /forcequorum

 

  4. Open the Failover Cluster Management tool and connect to the DAG's underlying cluster. Expand the cluster, and then expand Nodes. Right-click each node in the primary datacenter, select More Actions, and then select Evict. When you're done evicting the DAG members in the primary datacenter, close the Failover Cluster Management tool.

When the DAG isn’t in DAC mode, the steps to complete activation of the mailbox servers in the second datacenter are as follows:

  1. The quorum must be modified based on the number of DAG members in the second datacenter.

If there's an odd number of DAG members, change the DAG quorum model from a Node and File Share Majority quorum to a Node Majority quorum by running the following command:

cluster <DAGName> /quorum /nodemajority

  2. If there's an even number of DAG members, reconfigure the witness server and directory by running the following command in the Exchange Management Shell:

 

Set-DatabaseAvailabilityGroup <DAGName> -WitnessServer <ServerName>

 

  3. Start the Cluster service on any remaining DAG members in the second datacenter by running the following command:

 

net start clussvc

  4. Perform server switchovers to activate the mailbox databases in the DAG by running the following command for each DAG member:

Move-ActiveMailboxDatabase -Server <DAGMemberinPrimarySite> -ActivateOnServer <DAGMemberinSecondSite>

  5. Mount the mailbox databases on each DAG member in the second site by running the following command:

Get-MailboxDatabase -Server <DAGMemberinSecondSite> | Mount-Database
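To confirm that everything is mounted where expected after the switchover, a quick check against the DR-site servers:

Get-MailboxDatabase -Status | Format-Table Name,Server,Mounted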

More information on DAC:

How DAC mode works: http://technet.microsoft.com/en-us/library/dd979790(v=exchg.141).aspx

Understanding DAC: http://technet.microsoft.com/en-us/library/dd351049.aspx

Regards,

Ganesh G