Search Results

Search found 75 results on 3 pages for 'ola karlsson'.

Page 1/3 | 1 2 3  | Next Page >

  • Update from Ola Hallengren: Target multiple devices during SQL Server backup

    - by Greg Low
    Ola has produced another update of his database management scripts. If you haven't taken a look at them, you should. At the very least, they'll give you good ideas about what to implement and how others have done so. The latest update allows targeting multiple devices during backup. This is available in native SQL Server backup and can be helpful with very large databases. Ola's scripts now support it as well. Details are here:
    http://ola.hallengren.com/sql-server-backup.html
    http://ola.hallengren.com/versions.html
    The following example shows backing up to 4 files on 4 drives, one file on each drive:

        EXECUTE dbo.DatabaseBackup
            @Databases = 'USER_DATABASES',
            @Directory = 'C:\Backup, D:\Backup, E:\Backup, F:\Backup',
            @BackupType = 'FULL',
            @Compress = 'Y',
            @NumberOfFiles = 4

    And this example shows backing up to 16 files on 4 drives, 4 files on each drive:

        EXECUTE dbo.DatabaseBackup
            @Databases = 'USER_DATABASES',
            @Directory = 'C:\Backup, D:\Backup, E:\Backup, F:\Backup',
            @BackupType = 'FULL',
            @Compress = 'Y',
            @NumberOfFiles = 16

    Ola mentioned that you can now back up to as many as 64 drives.
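
    For comparison, striping a backup over multiple files is expressed in native T-SQL with multiple DISK clauses, and Ola's procedure generates a command along these lines. A minimal sketch, in which the database name and file names are placeholders:

        BACKUP DATABASE MyDatabase
        TO DISK = 'C:\Backup\MyDatabase_1.bak',
           DISK = 'D:\Backup\MyDatabase_2.bak',
           DISK = 'E:\Backup\MyDatabase_3.bak',
           DISK = 'F:\Backup\MyDatabase_4.bak'
        WITH COMPRESSION;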

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation I often see is that companies with large databases have only one or two very large tables. They want to run DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request about this on the Connect site some time ago:
    https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option
    The workaround from the product team was that you could script out the checks you did want to carry out, rather than omitting the ones you didn't. I didn't much like this as a workaround, as clients often had a very large number of objects they did want to check and only one or two they didn't. I've always been impressed with the work our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
            @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
            @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
            @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
            @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the leading minus syntax to omit an object from the list of objects, and the equivalent option to omit a filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Ola Hallengren adds STATISTICS support to his solution

    - by AaronBertrand
    Last week, Ola published a very useful update to his Backup, Integrity Check and Index Optimization scripts: the solution now supports updating statistics. There are several options, such as only updating when the data has been modified, and using the RESAMPLE and NORECOMPUTE options. An example call:

        EXEC dbo.IndexOptimize
            @Databases = 'USER_DATABASES',
            @FragmentationHigh_LOB = 'INDEX_REBUILD_OFFLINE',
            @FragmentationHigh_NonLOB = 'INDEX_REBUILD_ONLINE',
            @FragmentationMedium_LOB = 'INDEX_REORGANIZE_STATISTICS_UPDATE'...(read more)

    Read the article

  • SQL Server Maintenance Utilities Update for SQL Server 2008 R2

    - by Greg Low
    Great to see that our friend Ola Hallengren has updated his maintenance utility scripts to deal with SQL Server 2008 R2. These scripts are highly regarded, particularly given the price: free! You'll find them here: http://ola.hallengren.com/Versions.html Ola noted that the main change from 2008 is that backup compression is now supported in the Standard Edition of SQL Server. That in itself is good news....(read more)
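
    To illustrate what that unlocks, native backup compression is just a WITH option on the backup command, so on a 2008 R2 Standard Edition instance something like the following now works. A minimal sketch; the database name and path are placeholders:

        BACKUP DATABASE MyDatabase
        TO DISK = 'C:\Backup\MyDatabase.bak'
        WITH COMPRESSION;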

    Read the article

  • Simultaneously calling multiple methods on a WCF service from Silverlight

    - by ola karlsson
    A while back I had to debug some performance issues in an existing Silverlight app. As the problem / solution was a bit obscure and finding info about it was quite tricky, I thought I'd share; maybe it can help the next person with this problem.

    The App
    On start, the app would do a number of calls to different methods on a WCF service, to populate the UI with the necessary data. Recently one of those services had been changed and was now taking quite a bit longer than it used to. This resulted in quite a long loading time for the whole UI, which was set up so it wouldn't let the user interact with anything until all the service calls had finished. First I broke out the longer-running service call from the others, then removed the constraint that it had to be loaded for the UI in general to become responsive. I also added a loading indicator just on that area of the UI, thinking that the main UI would load while this particular section could keep loading independently.

    The Problem
    However, this is where things started to get a bit strange. I found that even after these changes, the main UI wouldn't activate until the long-running call returned. So now I did what I should have done to start with: I got Fiddler out and had a look at what was really happening. What I found was that, once the call to the long-running service method was placed, all subsequent calls were waiting for that one to return before executing. Not having really worked with WCF previously or knowing much about it in general, I was stumped. I knew of the issues where Silverlight is restricted by the browser's networking features in regards to the number of simultaneous connections etc. However, that just didn't seem to be the issue here; you could clearly see in Fiddler that there were numerous calls, but they were just not returning. I thought the problem might be in the WCF service, but the calls were really not that complicated, and surely the service should be able to handle a lot more than what I was throwing at it! So I did what every developer does in this type of scenario: I hit the search engines. I did a whole bunch of searching on things like "multiple simultaneous WCF calls from Silverlight" and "calling long running WCF services from Silverlight" etc. This, however, pretty much got me nowhere; I found a whole heap of resources on how to do WCF calls from Silverlight, but most of them were very basic and of no use whatsoever.

    The fog is clearing
    It wasn't until I came across the term "WCF blocking calls" and started incorporating it in my searches that I started to get somewhere. Those searches quite quickly brought me to the following thread in the Silverlight forum, "Long-running WCF call blocking subsequent calls", which discussed the exact problem I was facing and, best of all, one of the guys there had the solution! The short answer is in the forum post, and the guy answering has also written a more extensive blog post about it called "Silverlight, WCF, and ASP.Net Configuration Gotchas" which covers it very well. So come on, what's the solution?! I heard you ask, unless you've already gone to the links and looked it up ;)

    The Solution
    Well, it turns out that the issue is founded in a mix of Silverlight, Asp.Net and WCF. Basically, if you're doing multiple calls to a single WCF web service and you have Asp.Net session state enabled, the calls will be executed sequentially by the service, hence any long-running calls will block subsequent ones.
    So why is Asp.Net session state affecting us? We're working in Silverlight, right? Well, as mentioned earlier, by default Silverlight uses the browser's networking stack when doing service calls, so to the WCF service, the call looks like it might as well be coming from a normal Asp.Net application. To get around this, we look to a feature introduced in Silverlight 3, namely the Client HTTP Stack.

    The Client HTTP Stack to the rescue
    By using the following syntax (for example in our App.xaml.cs, Application_Startup method)

        WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

    we can set our Silverlight application to use the Client HTTP Stack, which incidentally solves our problem! By using Silverlight's own networking stack, rather than that of the browser, we get around the Asp.Net - WCF session state issue. The above code specifies that all calls to addresses starting with "http://" should go through the client stack; this can actually be set with more granularity, and you can specify it to be used only for certain domains etc.

    Summary
    The actual solution is well covered in the forum and blog posts I link to above. This post is more about sharing my experience, hopefully helping to spread the word about this and maybe making it a bit easier for the next poor guy with this issue to find the solution. Until next time, Ola
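
    For completeness, here is the registration in context: a minimal sketch of an App.xaml.cs using the client stack. The RootVisual line assumes the default Silverlight project template, and the commented per-domain registration uses a placeholder URL.

        using System;
        using System.Net;
        using System.Net.Browser;
        using System.Windows;

        public partial class App : Application
        {
            public App()
            {
                this.Startup += this.Application_Startup;
                InitializeComponent();
            }

            private void Application_Startup(object sender, StartupEventArgs e)
            {
                // Route all http:// requests through Silverlight's client HTTP stack.
                // Requests then no longer carry the browser's ASP.NET session cookie,
                // so the server stops serializing them against the session lock.
                WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

                // Or register only a specific service domain (placeholder URL):
                // WebRequest.RegisterPrefix("http://services.example.com/", WebRequestCreator.ClientHttp);

                this.RootVisual = new MainPage();
            }
        }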

    Read the article

  • Cannot access files after trying to upgrade Ubuntu

    - by Ola
    I tried to upgrade Ubuntu from 11.10 to 12.04. I left it for 24 hours but the upgrade did not complete, so I cancelled it. I thought I would copy all my files to a DVD/CD and try downloading a fresh copy of Ubuntu. But now I cannot open or copy any files, and I cannot even shut down my laptop. I have many important files on my laptop. Can someone help me retrieve them? Regards Ola

    Read the article

  • Android MapView: Disable auto zoom

    - by Ola Andersson
    Hi. I have made an Android app that shows a MapView with two overlays: one MyLocationOverlay and one custom overlay. I am programmatically zooming and panning to what I want the map to show, but the map also auto-pans to my current location, which moves it away from what I want to show. So my question is simply: how can I disable the auto pan? Thanks, Ola

    Read the article

  • Dynamic fields with Thinking Sphinx

    - by Ola Karlsson
    Hi! I'm building an application where I have products and categories. A category has_many properties, and each property has a list of possible values. After a category is assigned to a product, all its properties show up in the form and the user can set each property to one of its possible values. My question is: is it possible for Thinking Sphinx to filter the products on a property and property value, e.g. :with => {:property_id => property_value}? If it's possible, what is the best way to implement this? If not, is there any other library out there that solves this problem? Thanks / Ola
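
    For what it's worth, a sketch of how this kind of filter is typically declared with Thinking Sphinx attributes; all names here (Product, properties, property_ids) are illustrative, not from the question:

        class Product < ActiveRecord::Base
          has_many :properties

          define_index do
            indexes name
            # Expose associated property ids as a multi-value attribute,
            # so they can be used in :with filters at search time.
            has properties(:id), :as => :property_ids
          end
        end

        # Filter on the attribute when searching:
        Product.search :with => { :property_ids => 42 }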

    Read the article

  • Intellitrace bug causes “Operation could destabilize the runtime” exception

    - by Magnus Karlsson
    We can't use it when we use SimpleMembership to handle external authorizations.

    Server Error in '/' Application.
    Operation could destabilize the runtime.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Security.VerificationException: Operation could destabilize the runtime.
    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [VerificationException: Operation could destabilize the runtime.]
    DotNetOpenAuth.OpenId.Messages.IndirectSignedResponse.GetSignedMessageParts(Channel channel) +943
    DotNetOpenAuth.OpenId.ChannelElements.ExtensionsBindingElement.GetExtensionsDictionary(IProtocolMessage message, Boolean ignoreUnsigned) +282
    DotNetOpenAuth.OpenId.ChannelElements.<GetExtensions>d__a.MoveNext() +279
    DotNetOpenAuth.OpenId.ChannelElements.ExtensionsBindingElement.ProcessIncomingMessage(IProtocolMessage message) +594
    DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) +933
    DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) +326
    DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestBase httpRequest) +1343
    DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestBase httpRequestInfo) +241
    DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse() +361
    DotNetOpenAuth.AspNet.Clients.OpenIdClient.VerifyAuthentication(HttpContextBase context) +136
    DotNetOpenAuth.AspNet.OpenAuthSecurityManager.VerifyAuthentication(String returnUrl) +984
    Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.VerifyAuthenticationCore(HttpContextBase context, String returnUrl) +333
    Microsoft.Web.WebPages.OAuth.OAuthWebSecurity.VerifyAuthentication(String returnUrl) +192
    PrioMvcWebRole.Controllers.AccountController.ExternalLoginCallback(String returnUrl) in c:hiddenforyou
    lambda_method(Closure , ControllerBase , Object[] ) +127
    System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +250
    System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39
    System.Web.Mvc.Async.<>c__DisplayClass39.<BeginInvokeActionMethodWithFilters>b__33() +87
    System.Web.Mvc.Async.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49() +439
    System.Web.Mvc.Async.<>c__DisplayClass4f.<InvokeActionMethodFilterAsynchronously>b__49() +439
    System.Web.Mvc.Async.<>c__DisplayClass37.<BeginInvokeActionMethodWithFilters>b__36(IAsyncResult asyncResult) +15
    System.Web.Mvc.Async.<>c__DisplayClass2a.<BeginInvokeAction>b__20() +34
    System.Web.Mvc.Async.<>c__DisplayClass25.<BeginInvokeAction>b__22(IAsyncResult asyncResult) +221
    System.Web.Mvc.<>c__DisplayClass1d.<BeginExecuteCore>b__18(IAsyncResult asyncResult) +28
    System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15
    System.Web.Mvc.Controller.EndExecuteCore(IAsyncResult asyncResult) +42
    System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15
    System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__3(IAsyncResult asyncResult) +42
    System.Web.Mvc.Async.<>c__DisplayClass4.<MakeVoidDelegate>b__3(IAsyncResult ar) +15
    System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +523
    System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +176

    Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.17929

    Read the article

  • How to use Azure storage for uploading and displaying pictures.

    - by Magnus Karlsson
    Basic set up of Azure storage for local development and production. This somewhat completes the following guide from http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ with a practical example that I believe is commonly used, i.e. uploading and presenting an image from a user.

    First we set up local storage, and then we configure it to work in a web role. Steps:
    1. Configure the connection string locally.
    2. Configure the model, controllers and Razor views.

    1. Set up the connection string
    1.1 Right-click your web role and choose "Properties".
    1.2 Click Settings.
    1.3 Add setting.
    1.4 Name your setting. This will be the name of the connection string.
    1.5 Click the ellipsis to the right (the ellipsis appears when you mark the area).
    1.6 In the window that appears, select "Windows Azure storage emulator" and click OK.

    Now we have a connection string to use. To be able to use it we need to make sure we have the Windows Azure tools for storage.
    2.1 Click Tools -> Library Package Manager -> Manage NuGet packages for solution.
    2.2 This is what it looks like after it has been added.

    Now on to what the code should look like.
    3.1 First we need a view which collects images to upload. Here Index.cshtml:

        @model List<string>

        @{
            ViewBag.Title = "Index";
        }

        <h2>Index</h2>
        <form action="@Url.Action("Upload")" method="post" enctype="multipart/form-data">
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file1" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file2" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file3" />
            <br />
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file4" />
            <br />
            <input type="submit" value="Submit" />
        </form>

        @foreach (var item in Model) {
            <img src="@item" alt="Alternate text"/>
        }

    3.2 We need a controller to receive the post. Notice the "containername" string I send to the BlobHandler: I use this as a folder for the pictures for each user. If this is not a requirement, you could just call it "container", or anything in lowercase, directly when creating the container.

        public ActionResult Upload(IEnumerable<HttpPostedFileBase> file)
        {
            BlobHandler bh = new BlobHandler("containername");
            bh.Upload(file);
            var blobUris = bh.GetBlobs();

            return RedirectToAction("Index", blobUris);
        }

    3.3 The handler model. I'll let the comments speak for themselves:

        public class BlobHandler
        {
            // Retrieve storage account from connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
                CloudConfigurationManager.GetSetting("StorageConnectionString"));

            private string imageDirecoryUrl;

            /// <summary>
            /// Receives the user's Id for where the pictures are and creates
            /// a blob container with that name if it does not exist.
            /// </summary>
            /// <param name="imageDirecoryUrl"></param>
            public BlobHandler(string imageDirecoryUrl)
            {
                this.imageDirecoryUrl = imageDirecoryUrl;

                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                // Create the container if it doesn't already exist.
                container.CreateIfNotExists();

                // Make available to everyone.
                container.SetPermissions(
                    new BlobContainerPermissions
                    {
                        PublicAccess = BlobContainerPublicAccessType.Blob
                    });
            }

            public void Upload(IEnumerable<HttpPostedFileBase> file)
            {
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve a reference to a container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                if (file != null)
                {
                    foreach (var f in file)
                    {
                        if (f != null)
                        {
                            CloudBlockBlob blockBlob = container.GetBlockBlobReference(f.FileName);
                            blockBlob.UploadFromStream(f.InputStream);
                        }
                    }
                }
            }

            public List<string> GetBlobs()
            {
                // Create the blob client.
                CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

                // Retrieve reference to a previously created container.
                CloudBlobContainer container = blobClient.GetContainerReference(imageDirecoryUrl);

                List<string> blobs = new List<string>();

                // Loop over blobs within the container and output the URI to each of them.
                foreach (var blobItem in container.ListBlobs())
                    blobs.Add(blobItem.Uri.ToString());

                return blobs;
            }
        }

    3.4 So, when the files have been uploaded, we present them to our user on the index page. Pretty straightforward. In this example we only present the images by sending the URIs to the view. A better way would be to wrap them in a view model containing URI, metadata, alternate text and other relevant information, but for this example this is all we need.

    4. Now press F5 in your solution to try it out. You can see the storage emulator UI here.
    4.1 If you get any exceptions or errors, I suggest first checking that the service is running correctly. I had problems with this; they seemed related to the installation, and a reboot fixed them.

    5. Set up for cloud storage. To do this we need to add configuration for the cloud, just as we did for local storage in step one.
    5.1 We need our keys to do this. Go to the Windows Azure management portal, select the storage icon to the right and click "Manage keys". (Image from a different blog post though.)
    5.2 Do as in step 1, but replace step 1.6 with:
    1.6 Choose "Manually entered credentials". Enter your account name.
    1.7 Paste your Account Key from step 5.1 and click OK.
    5.3 Save, publish and run! Please feel free to ask any questions using the comments form at the bottom of this page. I will get back to you to help you solve any questions. Our consultancy agency also provides services in the Nordic regions if you would like any further support.
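
    As a side note, the manually entered credentials from step 5.2 can equivalently be stored as a storage connection string setting; a sketch of the usual format, with placeholder account name and key:

        DefaultEndpointsProtocol=https;AccountName=youraccountname;AccountKey=youraccountkey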

    Read the article

  • Connect to LocalDB using SQL Server Management Studio

    - by Magnus Karlsson
    I was trying to find my database for LocalDB under localhost etc. but had no luck. The following led me to just connect to it; kind of obvious really when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx

    High-Level Overview
    After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:
    - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application is using the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it and operates on data using the same T-SQL language as provided by SQL Express.
    - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location.
    - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application is just connecting to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down.
    - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
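
    A minimal sketch of connecting from code, using the same Data Source the quoted article gives; the attached database path is a placeholder:

        using System.Data.SqlClient;

        class LocalDbDemo
        {
            static void Main()
            {
                // The LocalDB process starts on demand when the connection opens
                // and shuts down a few minutes after the last connection closes.
                var connectionString =
                    @"Data Source=(localdb)\v11.0;Integrated Security=true;" +
                    @"AttachDbFileName=C:\Data\MyDatabase.mdf";  // placeholder path

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                }
            }
        }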

    Read the article

  • How do I enable sound with the "linux-virtual" kernel?

    - by Ola Tuvesson
    I've been trying to enable sound for the linux-virtual kernel as I want to run an ultra slim Ubuntu server under VirtualBox but need audio. The resource usage difference between virtual and generic/server is surprisingly large, with the virtual kernel system using 80Mb less RAM after a clean boot (130Mb vs 210Mb), and I really want to squeeze every clock cycle and available byte I can out of the system. Besides, the virtual kernel has some additional optimisations enabled specifically for virtual machines (or so I am told). Now I have compiled my own kernel a few times in the past, for example to include the Intel-PHC module (for improved power management on Thinkpads), so the concept is not entirely alien to me, but I've run into a strange problem which I'm hoping someone can help explain: When I do a diff between the config files for Linux-generic and Linux-virtual there are precious few differences, and certainly none which pertain to sound support; there are really only five or six lines which differ, and they're mainly to do with i/o timing, sleep state and priorities. What gives? I expected the differences to be extensive, and that I would be able to identify the options that enabled audio by looking at them, but my problem doesn't seem to be related to the config file at all (yes, I know about the sound drivers section - it is identical between the two kernel configs). Am I looking in the wrong place? Many thanks!

    Read the article

  • How to avoid big and clumsy UITableViewController on iOS?

    - by Johan Karlsson
    I have a problem when implementing the MVC pattern on iOS. I have searched the Internet but can't seem to find any nice solution to this problem. Many UITableViewController implementations seem to be rather big. Most examples I have seen let the UITableViewController implement UITableViewDelegate and UITableViewDataSource. These implementations are a big reason why UITableViewController is getting big. One solution would be to create separate classes that implement UITableViewDelegate and UITableViewDataSource. Of course these classes would have to have a reference to the UITableViewController. Are there any drawbacks to using this solution? In general I think you should delegate the functionality to other "helper" classes or similar, using the delegate pattern. Are there any well-established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC pattern. But the big question is: how should you divide the controller of an MVC implementation into smaller manageable pieces? (Applies to MVC on iOS in this case.) There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue, and an argument why this solution is awesome.

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around (probably debugging).

    1. Create certificate
    1.1 Right-click the cloud project (marked in red) and select "Configure remote desktop".
    1.2 In the drop-down list of certificates, choose <create> at the bottom.
    1.3 Follow the instructions; you can set it up with default values.
    1.4 When done, choose the certificate and click "Copy to File…" as seen in the left of the picture above.
    1.5 Save the file with any name you want. Next we will add it to the local certificate store, to be able to import it into our solution through the Azure configuration manager in step 3.

    2. Save certificate to local storage
    Now we need to attach it to our local certificate store to be able to reach it from the configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137

    In order to view the Certificates store on the local computer, perform the following steps:
    1. Click Start, and then click Run.
    2. Type "MMC.EXE" (without the quotation marks) and click OK.
    3. Click Console in the new MMC you created, and then click Add/Remove Snap-in.
    4. In the new window, click Add.
    5. Highlight the Certificates snap-in, and then click Add.
    6. Choose the Computer option and click Next.
    7. Select Local Computer on the next screen, and then click OK.
    8. Click Close, and then click OK.

    You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use.

    Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps:
    1. Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed; if they are not, that is because there are no certificates installed.
    2. Right-click Certificates (or Personal if that option does not exist).
    3. Choose All Tasks, and then click Import.
    4. When the wizard starts, click Next.
    5. Browse to the PFX file you created containing your server certificate and private key. Click Next.
    6. Enter the password you gave the PFX file when you created it. Be sure the "Mark the key as exportable" option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key.
    7. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, they will also be added to this store.
    8. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish.

    You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate).

    Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps:
    1. Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on.
    2. Right-click on the site and click Properties. You should now see the properties screen for the Web site.
    3. Click the Directory Security tab.
    4. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next.
    5. Choose the "Assign an existing certificate" option and click Next.
    6. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next.
    7. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct, or you may have problems using SSL or TLS in HTTP communications.
    8. Click Next, and then click OK to exit the wizard.

    You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel.

    Image of a typical MMC.EXE with the certificates snap-in open.

    3. Import the certificate into your Visual Studio project
    3.1 Now right-click your equivalent of the MvcWebRole1 (as seen in the first picture, under the red oval) and choose Properties.
    3.2 Choose Certificates. Click the ellipsis to the right of the "Thumbprint" field and you should be able to select your newly created certificate here. After selecting it, save the file.

    4. Upload the certificate to your Azure subscription
    4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.

    5. Connect to the server
    Since I tried to use my account settings (you have to use another name), we have to set up a new name for the connection. No biggie.
    5.1 Go to the Azure management portal, select your service and, in the bottom menu, choose "REMOTE". This will display the configuration for the remote connection. It will actually change your ServiceConfiguration.cscfg file; after you change it here, it might be good to download it and replace the one in your project. Set a name that is not your Windows Azure account name and not Administrator.
    5.2 Go to Visual Studio, open Server Explorer, choose the role as selected in the picture below and click "Connect using Remote Desktop".
    5.3 You will now be able to log in with the name and password set up in step 5.1, and voila! Windows Server 2012, IIS and other nice stuff!

    To do this I've been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can find some of this information and more.

    Read the article

  • How to avoid big and clumsy UITableViewController on iOS?

    - by Johan Karlsson
    I have a problem when implementing the MVC pattern on iOS. I have searched the Internet but can't seem to find any nice solution to this problem. Many UITableViewController implementations seem to be rather big. Most examples I have seen let the UITableViewController implement <UITableViewDelegate> and <UITableViewDataSource>. These implementations are a big reason why UITableViewController is getting big. One solution would be to create separate classes that implement <UITableViewDelegate> and <UITableViewDataSource>. Of course these classes would have to have a reference to the UITableViewController. Are there any drawbacks to using this solution? In general I think you should delegate the functionality to other "helper" classes or similar, using the delegate pattern. Are there any well-established ways of solving this problem? I do not want the model to contain too much functionality, nor the view. I believe that the logic should really be in the controller class, since this is one of the cornerstones of the MVC pattern. But the big question is: how should you divide the controller of an MVC implementation into smaller manageable pieces? (Applies to MVC on iOS in this case.) There might be a general pattern for solving this, although I am specifically looking for a solution for iOS. Please give an example of a good pattern for solving this issue, and provide an argument why your solution is awesome.
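
    For illustration, a minimal sketch of the separation the question proposes, written in Swift with illustrative names: the data source lives in its own class, and the controller only wires it up.

        import UIKit

        final class ItemListDataSource: NSObject, UITableViewDataSource {
            var items: [String] = []

            func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
                return items.count
            }

            func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
                // Assumes a cell registered (or prototyped) under the identifier "Cell".
                let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
                cell.textLabel?.text = items[indexPath.row]
                return cell
            }
        }

        final class ItemListViewController: UIViewController {
            @IBOutlet var tableView: UITableView!

            // Keep a strong reference: UITableView's dataSource property is weak.
            private let dataSource = ItemListDataSource()

            override func viewDidLoad() {
                super.viewDidLoad()
                tableView.dataSource = dataSource
            }
        }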

    Read the article

  • Ubuntu doesn't see my phone (Sony Xperia Tipo)

    - by ola
    When I connect my Xperia Tipo phone to my Ubuntu 12.04, the USB icon does not appear in the launcher. lsusb gives me the following results:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 8086:0189 Intel Corp.
        Bus 001 Device 004: ID 1bcf:2880 Sunplus Innovation Technology Inc.
        Bus 003 Device 003: ID 0fce:5170 Sony Ericsson Mobile Communications AB

    I installed Wammu and it does not see the phone either:

        Wammu is now searching for phone:
        All finished, found 0 phones
        No phone has been found!

    On the phone I have debugging turned on and it sees Ubuntu. Can I ask for step-by-step instructions? I am a beginner on Ubuntu.

    Read the article

  • Making an Ajax request to a page method in ASP.NET MVC 2

    - by JLago
    I'm trying to call a page method belonging to an MVC controller from another site, by means of:

        $.ajax({
            type: "GET",
            url: "http://localhost:54953/Home/ola",
            data: "",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(data) {
                console.log(data.Name);
            }
        });

    The method code is as follows; really simple, just to test:

        public ActionResult ola()
        {
            return Json(new ActionInfo() { Name = "ola" }, JsonRequestBehavior.AllowGet);
        }

    I've seen this approach being suggested here, and I actually like it a lot, should it work... When I run this, Firebug gets a 200 OK, but the data received is null. I've tried a lot of different approaches, like having the data in text (which gives me "(an empty string)" instead of just "null") or returning a string from the server method... Can you tell me what I am doing wrong? Thank you in advance, João

    Read the article

  • IIS 7 returns 304 instead of 200

    - by Ola Herrdahl
    I have a strange issue with IIS 7. Sometimes it seems to return a 304 instead of a 200. Here is a sample request captured with Fiddler (note that the file requested is not located in my browser's cache yet):

        GET https://[mysite]/Content/js/jquery.form.js HTTP/1.1
        Accept: */*
        Referer: https://[mysite]/Welcome/News
        Accept-Language: sv-SE
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Host: [mysite]
        Connection: Keep-Alive
        Cache-Control: no-cache
        Cookie: ...

    Note that there is no If-Modified-Since or If-None-Match in the request. But still the response is:

        HTTP/1.1 304 Not Modified
        Cache-Control: public
        Expires: Tue, 02 Mar 2010 06:26:08 GMT
        Last-Modified: Mon, 22 Feb 2010 21:58:44 GMT
        ETag: "1CAB40A337D4200"
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 01 Mar 2010 17:06:34 GMT

    Does anyone have a clue of what could be wrong here? I'm running IIS 7 on Windows Web Server 2008 R2.

    Read the article

  • Rsync from godaddy to OS X

    - by Ola
    I would like to use rsync to back up my website to my local computer (OS X). I started off with this guide and got pretty far. I use the following rsync line:

        rsync -PzrlptgD --del --delete-excluded -r --rsync-path=~/bin/rsync user@server:~/ /local/backup/folder/

    I wanted to use the -a option (same as rlptgoD) but it crashes as soon as I use the -o flag:

        receiving file list ...
        rsync: connection unexpectedly closed (8 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    If I skip the --owner flag it copies the files, but I'm not really sure what difference it makes (I've tried to read up on it but found nothing). Should I just skip using the --owner flag? Or have I made some other mistake? Thanks in advance //OL

    Read the article

  • Permission denied when running Rails app in VirtualBox Ubuntu guest with files on Windows host

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat. So I've got a folder "Rails" on my NTFS drive, which I've shared (Samba) from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents. But when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file) - why is that? Are there any major conceptual flaws with this approach? Would I be better off using the VBox "shared folders" function?

    Read the article

  • Cannot remove wireless network profile even though admin account is used

    - by David Karlsson
    On my Windows computer, which is connected to a company domain, I have problems with the wireless networking. First of all, the computer fails to connect. Second of all, I cannot remove the network from the list of wireless profiles. The properties window simply claims that "This network is administered by the Administrator Account". I am currently logged in as the local administrator. I have also tried creating a new admin account and still get the same problem when trying to remove the network. My computer has only the Microsoft Security Essentials antivirus and some VMware + VirtualBox connections that I figure might interfere, but disabling realtime protection has not helped me with this. I also cannot delete the virtual network adapters from Control Panel / Network Adapters...
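
    As a side note, and assuming the profile isn't pushed by domain group policy (in which case removing it won't stick), wireless profiles can normally be listed and removed from an elevated command prompt; the profile name below is a placeholder:

        netsh wlan show profiles
        netsh wlan delete profile name="CompanyWiFi"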

    Read the article

  • Com port redirection from Windows 7 to Windows Server 2008 R2

    - by Ola Eldøy
    We use "Copy file.prn to \tsclient\com1" to print from a TS session to a locally attached serial printer. This works fine from Windows XP, but when trying it from a Windows 7 client computer, we get an "Access is denied" error message. And yes, the check box of COM port is selected on the Local Resources tab of the Remote Desktop Connection client. Any pointers? Has anyone even managed to do this successfully?

    Read the article

  • Permission denied problem in FreeNAS + Transmission

    - by Torbjörn Karlsson
    Running FreeNAS 0.7.2 (5543) and Transmission 2.11. The problem is that I cannot save a torrent wherever I want. For example, I can save in /mnt/1-500gb/Tv/dexter but I cannot save in /mnt/4-1000gb/tv/Lost. When I try to save in the Lost folder I get a permission denied error in the web interface, but when I try to save the same torrent file in the dexter folder everything works fine... This is probably an easy thing to fix, but I'm new to FreeNAS. The user name for Transmission is TorrentUser, if that helps. Now I find that I cannot browse the disks in QuiXplorer either; some work and some give me "Unable to read directory".

        $ mount
        /dev/md0 on / (ufs, local)
        devfs on /dev (devfs, local)
        procfs on /proc (procfs, local)
        /dev/fuse1 on /mnt/5 - 500gb (fusefs, local, synchronous)
        /dev/fuse2 on /mnt/2 - 1000gb (fusefs, local, synchronous)
        /dev/fuse3 on /mnt/3 - 1000gb (fusefs, local, synchronous)
        /dev/fuse4 on /mnt/4 - 1000gb (fusefs, local, synchronous)
        /dev/fuse5 on /mnt/320GB - USB (fusefs, local, synchronous)
        /dev/md1 on /var (ufs, local)
        /dev/da0a on /cf (ufs, local, read-only)
        /dev/fuse0 on /mnt/1 - 500gb (fusefs, local, synchronous)

    Don't work: 1 - 500gb, 2 - 1000gb, 3 - 1000gb
    Work: 320GB - USB, 4 - 1000gb, 5 - 500gb

    And these 3 disks are the same disks that I can save my torrents to. PS: every disk works perfectly when I use FTP...

    Read the article

  • Write permission when mounting Windows shares from Ubuntu

    - by Ola Tuvesson
    I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat. So I've got a folder "Rails" on my NTFS drive, which I've shared from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,file_mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents, but when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file). What gives?

    Read the article

1 2 3  | Next Page >