Hmm...This Might Work

Solutions from a day long since past

SharePoint 2013 Managed Metadata Service Application (MMSA) Gremlins

 

This post's objective: to simply document something I can't explain.

First, the environment: a single WFE and two app servers (APP1 and APP2). The SharePoint farm in question is running 2013 RTM bits (I know)…

The Timeline
  • Roughly 24 hours ago

An unplanned deployment of a custom farm solution. Mostly just an automated deployment of content pages with custom webparts. Nothing out of the ordinary here.

The standard post-deployment testing revealed nothing out of the ordinary, other than Search and SSRS not playing well with each other in this production farm.

<rant>It seems the Microsoft support solution is to recreate the Search Service App, and then SSRS will play nice in the logs. I'll tell you this works, but it's not what I'd call acceptable…</rant>

  • 10 hours ago

Notification from a user of a Managed Metadata navigation malfunction

  • 6 hours ago

Started troubleshooting the MMSA. The service application interface in CA showed an error indicating “The Managed Metadata Service or Connection is currently not available. The Application Pool or Managed Metadata Web Service may not have been started. Please Contact your Administrator.”  Naturally I figured there was a stopped application pool, which there was; it just wasn't the one running this service.
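For what it's worth, a quick way to see which application pool the MMSA actually runs under (rather than guessing from CA) is something like the sketch below, run from the SharePoint 2013 Management Shell. The filter on TypeName assumes a single managed metadata service application in the farm.

    # Sketch only: find the MMSA and the application pool it claims to use,
    # then cross-check the actual IIS application pool states on this server.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $mms = Get-SPServiceApplication | Where-Object { $_.TypeName -like "Managed Metadata*" }
    $mms | Select-Object DisplayName, Status, @{ n = "AppPool"; e = { $_.ApplicationPool.Name } }

    Import-Module WebAdministration
    Get-ChildItem IIS:\AppPools | Select-Object Name, State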

Next I tried to open the service connection properties, only to get this error (from ULS): “Application error when access /_admin/ManageMetadataProxy.aspx, Error=Retrieving the COM class factory for component with CLSID {BDEADF26-C265-11D0-BCED-00A0C90AB50F} failed due to the following error: 800703fa Illegal operation attempted on a registry key that has been marked for deletion. (Exception from HRESULT: 0x800703FA).”
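If you want to fish that same entry back out of ULS yourself, something along these lines should do it from the Management Shell; the one-hour window and the message filters are just assumptions for illustration.

    # Sketch only: pull recent ULS entries matching the metadata proxy page error.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    Get-SPLogEvent -StartTime (Get-Date).AddHours(-1) |
        Where-Object { $_.Message -like "*ManageMetadataProxy.aspx*" -or $_.Message -like "*800703FA*" } |
        Select-Object Timestamp, Area, Category, Level, Message |
        Format-List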

At this point I'm at a loss and figure I'll try to restart the services via CA. Just for good measure I started the service on each server (WFE, APP1, APP2). Same results; nothing changed.
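The PowerShell equivalent of what I was clicking through in CA looks roughly like this sketch; the TypeName string is the stock name for the service instance, and the loop simply covers every server in the farm.

    # Sketch only: report and start the Managed Metadata Web Service on every server.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    Get-SPServiceInstance |
        Where-Object { $_.TypeName -eq "Managed Metadata Web Service" } |
        ForEach-Object {
            Write-Host ("{0} on {1}: {2}" -f $_.TypeName, $_.Server.Address, $_.Status)
            if ($_.Status -ne "Online") { Start-SPServiceInstance $_ | Out-Null }
        }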

Read a blog post suggesting the unlikely possibility that the application pool account needed access to the service application. It worked well prior to this event, but for good measure let's add it in. Same results; nothing changed.
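For completeness, granting a web application pool account access to the MMSA looks roughly like the sketch below; the account name and the "Full Access to Term Store" permission level are placeholders you'd swap for whatever that blog actually recommended.

    # Sketch only: grant an app pool account rights on the managed metadata service application.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $mms      = Get-SPServiceApplication | Where-Object { $_.TypeName -like "Managed Metadata*" }
    $security = Get-SPServiceApplicationSecurity $mms
    $claim    = New-SPClaimsPrincipal -Identity "DOMAIN\AppPoolAccount" -IdentityType WindowsSamAccountName   # placeholder account

    Grant-SPObjectSecurity $security $claim "Full Access to Term Store"
    Set-SPServiceApplicationSecurity $mms -ObjectSecurity $security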

  • 3 hours ago

Resigned myself to throwing the Hail Mary of an IISReset, just to see if it would commit anything changed to this point. Sent a notification to the enterprise giving a heads-up about the unplanned reset.

  • 2 hours ago

Getting ready to go to lunch, I figured I'd take a quick look before throwing the switch on the IISReset. Before checking the MMSA, I ran Get-CacheClusterHealth, only to get the error “No valid cluster settings were provided with Use-CacheCluster.”  Not a big deal; I anticipated this, so I ran Use-CacheCluster and then Get-CacheClusterHealth once more. This time I received the expected cluster health statistics. Getting somewhat anxious to make some headway, I figured I'd flip back over to the MMSA to make sure it was in fact still broken.
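For reference, the Distributed Cache check in question was just the usual sequence, run from an elevated SharePoint 2013 Management Shell on a cache host (Get-CacheHost isn't mentioned above, but it's the same module's per-host status check):

    # Sketch only: bind to the local cache cluster config, then check health.
    Use-CacheCluster
    Get-CacheClusterHealth
    Get-CacheHost   # per-host service status, e.g. UP/DOWN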

So yeah, as you might have guessed: it automagically started working.

  • 4 hours in the future

A cold beer or maybe…just maybe…a good shot of tequila.

Closing

In the end I can only blame the events on Gremlins; someone clearly fed the SharePoint Mogwai after dark, and they had fun wreaking havoc. I can only send thanks to Rambo-Gizmo for eradicating the issue.

What I hate most about today's events is the numerous posts, such as this one by SharePointBabe, concluding that it's just quicker to rebuild the service application. My issue is that this is not really an acceptable solution. I wouldn't have such an issue with it if Microsoft support didn't take the same approach…but then I've already had my rant for this post.

posted on Thursday, November 7, 2013 3:09 PM | Filed Under [ SharePoint Rants, SharePoint 2013 ]
