Search Results

Search found 24674 results on 987 pages for 'visual studio 2005 tips'.

Page 707/987 | < Previous Page | 703 704 705 706 707 708 709 710 711 712 713 714  | Next Page >

  • What are the pre-requisites for writing .NET web services?

    - by wackytacky99
    I am very new to web development. I have been a C/C++ programmer for 5 years and I'm starting to get into web development, writing web services, etc. I understand the basic concepts of web services. I know .NET web services can be written in VB or C#, and working with C/C++ should help me get used to writing code in C#. I do not have experience with the .NET Framework. I'd like to quickly get into writing .NET web services and learn on the go, without spending a lot of time learning the .NET Framework up front (if possible). Any suggestions? Update: I know my way around databases and SQL Express in Visual Studio.
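    As an illustration of how low the entry barrier is, here is a minimal sketch of a classic ASMX web service in C# (the class, namespace and method names are made up for illustration, not taken from the question):

        using System.Web.Services;

        // Minimal "classic" ASMX web service: add an .asmx file to an ASP.NET
        // project and the runtime exposes HelloWorld over SOAP/HTTP.
        [WebService(Namespace = "http://example.com/demo/")]
        public class HelloService : WebService
        {
            [WebMethod]
            public string HelloWorld(string name)
            {
                return "Hello, " + name;
            }
        }

    In Visual Studio, the ASP.NET Web Service project template generates essentially this skeleton for you, so prior C# experience is enough to get started.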

    Read the article

  • Imaginet Resources acquires Notion Solutions

    - by Aaron Kowall
    Huge news for my company and me especially. http://www.imaginets.com/news--events/imaginet_acquisition_notion.html With the acquisition we become a very significant player in the Microsoft ALM space.  This increases our scale significantly and also our knowledge base.  We now have two Regional Directors and a pile of MS MVPs. The timing couldn’t be more perfect since the launch of Visual Studio 2010 and Team Foundation Server 2010 is TODAY!! Oh, and we aren’t done with announcements today… More later. Technorati Tags: VS 2010,TFS 2010,Notion,Imaginet

    Read the article

  • Free Version of Oracle Application Development Framework (Oracle ADF)

    - by Steve Muench
    I'm very happy to finally be able to talk about this. A long time coming, the press release is finally out: Oracle Introduces Free Version of Oracle Application Development Framework New Oracle ADF Essentials Brings ADF Benefits to the Broader Developer Community Oracle ADF Essentials is a free packaging of core technologies from the Oracle Application Development Framework that can be used to develop and deploy applications that include ADF Business Components, ADF Controller, ADF Binding, and ADF Faces Rich Client Components without incurring licensing costs. Both Oracle JDeveloper and Oracle Enterprise Pack for Eclipse provide a visual and declarative development experience for using it. Oracle ADF Essentials comes with specific instructions and certification for deploying applications on the open-source GlassFish server, but the license is not limited to that server. For more information and to download it (it's only 20MB), see the Oracle ADF Essentials page on OTN.

    Read the article

  • Creating and Using a jQuery Plug-in in ASP.NET Web Forms

    - by bipinjoshi
    Developers often resort to code reuse techniques in their projects. As far as ASP.NET server-side programming is concerned, classes, class libraries, components, custom server controls and user controls are popular code reuse techniques. Modern ASP.NET web applications no longer restrict themselves only to server-side programming; they also make use of client-side scripting to render rich web forms. No wonder Microsoft Visual Studio 2010 includes the jQuery library by default as part of a newly created web site. If you are using jQuery for client-side scripting, then one way to reuse your client-side code is to create a jQuery plug-in. Creating a plug-in allows you to bundle your reusable jQuery code in a neat way and then reuse it across web forms. In this article you will learn how to create a simple jQuery plug-in from scratch. You will also learn about certain guidelines that help you build professional jQuery plug-ins. http://www.bipinjoshi.net/articles/aae84a03-b4a8-477d-b087-5b7f42935220.aspx
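    For context, the basic shape of a jQuery plug-in is a function attached to $.fn; a minimal sketch follows (the plug-in name and options are invented for illustration and are not the article's actual example):

        (function ($) {
            // Minimal jQuery plug-in skeleton: adds $(selector).highlight(options)
            $.fn.highlight = function (options) {
                // Merge caller-supplied options with sensible defaults
                var settings = $.extend({ color: "yellow" }, options);

                // Return 'this' so the plug-in remains chainable
                return this.each(function () {
                    $(this).css("background-color", settings.color);
                });
            };
        }(jQuery));

        // Usage on a web form: $("p.note").highlight({ color: "#ffd" });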

    Read the article

  • Compiz effects aren't working properly!

    - by Naveen
    On Ubuntu 12.04, the Compiz visual effects appear to be glitchy. I see a "window flash" after rotating the "Desktop Cube". I've used the "Magic Lamp" effect when minimizing windows, but the window content minimizes faster than the window border, so the border and the window content come apart, like in the picture below. I've tried updating Compiz, but no luck. How do I fix this problem? Will Compiz work properly in XFCE? This didn't happen in Lucid Lynx. I appreciate any help!

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.NET MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc. to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t use, and to write a small set of custom code which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: We are also using Visual Studio 2012.) Now TFS Azure uses your Live ID and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won’t work. The bottom line is that it is neither straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do: Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else. In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”: Microsoft.TeamFoundation.Client.dll Microsoft.TeamFoundation.Common.dll Microsoft.TeamFoundation.VersionControl.Client.dll Microsoft.TeamFoundation.VersionControl.Common.dll Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll Microsoft.TeamFoundation.WorkItemTracking.Client.dll Microsoft.TeamFoundation.WorkItemTracking.Common.dll If hosting this in Internet Information Server, for the application pool this app runs under, you will need to enable 32-bit support. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below: <appSettings> <!-- Add reference to TFS Client Cache --> <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" /> </appSettings> With all that in place, you can write the following code: var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}"); var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token); var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds); currentCollection.EnsureAuthenticated(); In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance.
In addition, make sure the service account username and password that were generated in the first step are substituted in here. Note: If something is not right, the "EnsureAuthenticated()" call will throw an exception with a message saying you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a similar yet slightly different not-authorised message. And that is it. You can then query the collection using something like: var service = currentCollection.GetService<WorkItemStore>(); var proj = service.Projects[0]; var allQueries = proj.StoredQueries; for (int qcnt = 0; qcnt < allQueries.Count; qcnt++) {     var query = allQueries[qcnt];     var queryDesc = string.Format("Query found named: {0}", query.Name); } You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boatload of time and frustration.
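    Pulling the fragments above together, a consolidated sketch of the connection and query code might look like the following (the service account name, password and domain are placeholders, and the namespaces come from the assemblies listed above):

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class TfsQuerySketch
        {
            static void Main()
            {
                // Placeholders - substitute the service account created with the
                // TFS Service Credential Viewer and your own TFS in Azure domain.
                var token = new SimpleWebTokenCredential(
                    "{your-service-account-name}", "{your-service-acct-password}");
                var clientCreds = new TfsClientCredentials(token);

                // Note the "defaultcollection" segment at the end of the URL.
                var currentCollection = new TfsTeamProjectCollection(
                    new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
                currentCollection.EnsureAuthenticated();

                // List the stored queries in the first team project.
                var store = currentCollection.GetService<WorkItemStore>();
                var project = store.Projects[0];
                foreach (StoredQuery query in project.StoredQueries)
                {
                    Console.WriteLine("Query found named: {0}", query.Name);
                }
            }
        }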

    Read the article

  • What are the 'must know' GDB commands?

    - by Chris Smith
    I'm starting to get the hang of GDB, but everything still feels much slower than when debugging in Eclipse or Visual Studio. Are there any GDB commands you find particularly useful/productive? My life became dramatically better when I discovered: list - Display source code near the current instruction But that is still pretty basic. (And unnecessary when running GDB from Emacs.) Is there any way to do things like set up a watch window? (Print and update the result of an expression every time execution stops.)
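    For what it's worth, GDB's display command comes very close to the watch-window behaviour described above: it re-evaluates and prints an expression every time execution stops. A quick illustration, written as a GDB command file ('counter' and 'flags' are made-up variable names):

        # Re-evaluate and print these expressions every time execution stops
        display counter
        # Same idea, but formatted as hexadecimal
        display/x flags
        # List the active auto-display expressions
        info display
        # Stop auto-displaying expression number 1
        undisplay 1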

    Read the article

  • How to create a bootable Clonezilla USB with Tuxboot that works?

    - by Feanor
    I'm trying to create a bootable Clonezilla USB with Tuxboot, the application recommended by the Clonezilla site. I installed it via an Ubuntu PPA and followed the instructions on the site to put the files on the USB drive. Everything went well, and then I restarted the system. Now, when I try to boot from the USB drive, it says: "This is not a bootable disk. Please insert a bootable floppy and press any key to try again ..." What is causing this problem? I really appreciate any help you can provide. My laptop model is Dell Studio 1558 and I'm running Ubuntu 14.04.

    Read the article

  • Oracle University New Courses (Week 42)

    - by rituchhibber
    New courses: Among this month's new offerings from Oracle University, you will find: Database Oracle Enterprise Manager Cloud Control 12c: Install & Upgrade (Training On Demand) MySQL Performance Tuning (Training On Demand) Fusion Middleware Oracle GoldenGate 11g Fundamentals for Oracle (4 days) Oracle WebCenter Content 11g: Site Studio Essentials (5 days) Oracle WebCenter Portal 11g: Build Portals with Spaces (3 days) Business Intelligence Oracle BI 11g R1: Create Analyses and Dashboards (4 days) SOA & BPM SOA Adoption and Architecture Fundamentals (3 days) eBusiness Suite R12 Oracle Using and Maintaining Approvals Management - Self-Study Course R12 Oracle HRMS Advanced Benefits Fundamentals - Self-Study Course WebLogic Oracle WebLogic Server 11g: Monitor and Tune Performance (Training On Demand) Oracle WebLogic Server 11g: Administration Essentials Self-Study Course (in French) Financial Oracle Project Financial Planning 11.1.2: Create Projects (3 days) Tuxedo Oracle Tuxedo 12c: Application Administration (5 days) Java Java SE 7: The Platform Evolves - Self-Study Course Primavera Primavera Client/Server Partner Trainer Course - Self-Study Course Primavera Progress Reporter 8.2 - Self-Study Course Contact your local Oracle University team for information and course dates.

    Read the article

  • Why is using C++ libraries so complicated?

    - by Pius
    First of all, I want to note I love C++ and I'm one of those people who thinks it is easier to code in C++ than Java. Except for one tiny thing: libraries. In Java you can simply add some jar to the build path and you're done. In C++ you usually have to set multiple paths for the header files and the library itself. In some cases, you even have to use special build flags. I have mainly used Visual Studio, Code::Blocks and no IDE at all. None of the three options differs much when it comes to using external libraries. I wonder why no simpler alternative has been made for this – something like a special .zip file that has everything you need in one place, so the IDE can do all the work of setting up the build flags for you. Is there any technical barrier to this?

    Read the article

  • On-the-fly graphical representation of code

    - by dukeofgaming
    I know about Omondo's plugin for live code-UML synchronization in Eclipse, but I was wondering if there was any other tool/IDE/IDE-extension that has some form of live graphical code representation (structural, flow, call-stacks, dependencies, etc.). I'm essentially looking for richer visual feedback on code while programming, not really looking for purely graphical code editors, though round-trips would be nice (edit graphically, code gets modified; edit code, representation gets modified). If you don't know of any graphical live documentation tool for code, maybe suggest one that can coexist with code, such as MySQL Workbench or Enterprise Architect.

    Read the article

  • Best Way To Develop Robust Cross-Platform Application?

    - by Clay
    Windows C programmer here (going back to 1992 and Windows95 back when it was called Windows93). Can function in C++, but mostly still a C programmer. Looking to build a cross-platform casual game. Very numbers heavy with only a few artistic embellishments and animations, so perhaps a development environment for business apps might be the best option. Or an easy-to-use 2D game dev platform. Target platforms: Windows, Mac, MS Tablet, iPhone, iPad, Android. I currently develop on Windows with Visual Studio 2012, but we could spend up to $50K on hardware/software/middleware if necessary. Not very competent getting open-source software working. Would rather pay the money and jump right into app development. Recommendations?

    Read the article

  • Philly GiveCamp 2010

    - by wulfers
    Spent the weekend helping out several non-profits doing what we like to do best...  Designing, developing and making people very happy with their new websites, systems, applications and features.  From what I saw at this GiveCamp, about 75 percent of the non-profits needed updated or new websites supporting CMS features and the ability for staff members to update the content on their websites.... Some cool apps were designed and developed..... A centralized system for distribution of daily schedules and task assistance to autistic clients was a show stopper, with an awesome central management interface, iPhone support and, in the future, Windows Phone support for tactile, auditory and visual cues. SharePoint was upgraded and forms were automated for a volunteer fire company that desperately needed some automation to help the firemen do their primary job. Many cool sites for non-profits that had either an outdated or non-existent web presence.

    Read the article

  • Webinar Recording on Cross Platform Development with MonoTouch and Mono for Android

    - by Wallym
    The iPhone and Android are dominant in the marketplace. The two platforms currently have 85% of the smartphone marketplace and are continuing to grow that market share. Developers are being tasked with targeting these two platforms. In this session, we'll take a high-level look at how we can use C# and .NET knowledge to share code between iOS and Android. We'll look at linked files, using the Xamarin Mobile API, the challenges of running across platforms and frameworks, as well as other features of Visual Studio, MonoTouch, MonoDevelop, and Mono for Android that allow us to write as much code as possible that can run on both platforms. The following link is a recording on Cross Platform Development with MonoTouch and Mono for Android. I am guessing that the link only works in IE. That's out of my control.

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Windows Azure: Server and Cloud Division

    - by kaleidoscope
    On 8th Dec 2009 Microsoft announced the formation of a new organization within the Server & Tools Business that combines the Windows Server & Solutions group and the Windows Azure group, into a single organization called the Server & Cloud Division (SCD). SCD will deliver solutions that help our customers realize even greater benefits from Microsoft’s investments in on-premises and cloud technologies.  And the new division will help strengthen an already solid and extensive partner ecosystem. Together, Windows Server, Windows Azure, SQL Server, SQL Azure, Visual Studio and System Center help customers extend existing investments to include a future that will combine both on-premises and cloud solutions, and SCD is now a key player in that effort. http://blogs.technet.com/windowsserver/archive/2009/12/08/windows-server-and-windows-azure-come-together-in-a-new-stb-organization-the-server-cloud-division.aspx   Tinu, O

    Read the article

  • Using SMO to drop a SQL Database

    - by ybbest
    SQL Server Management Objects (SMO) is the API you can use to manipulate SQL Server, for example to create and delete databases. For more details you can check the MSDN documentation. There are two ways you can drop a database. 1. You can create a Database object and call its Drop method: Dim database As Database = New Database(server, "YourDatabaseName") database.Drop() 2. However, if there are existing connections to the database, attempting to drop it using the above method will fail. Recall that when you drop a database from Management Studio, you can tick a check box to close all the connections before dropping the database. It is not so obvious, but you can do the exact same thing using SMO: Dim server As Server = New Server(ServerConn) server.KillAllProcesses("YourDatabaseName") server.KillDatabase("YourDatabaseName")
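    A self-contained sketch of the second approach might look like this (the server name and database name are placeholders, and it assumes references to the SMO assemblies):

        Imports Microsoft.SqlServer.Management.Smo
        Imports Microsoft.SqlServer.Management.Common

        Module DropDatabaseSketch
            Sub Main()
                ' Placeholders - substitute your own server and database names.
                Dim connection As New ServerConnection("localhost")
                Dim server As New Server(connection)

                ' Close existing connections, then drop the database.
                server.KillAllProcesses("YourDatabaseName")
                server.KillDatabase("YourDatabaseName")
            End Sub
        End Module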

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
Once you have chosen either Yes or No the backup will then be created. Click Finish to complete the process. The backup file for our Default Profile at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup

    Read the article

  • OpenGL (glx) not working with Ubuntu 12.10

    - by user26766
    It all started when I installed nvidia's own (download from website) driver. Uninstalling it and reverting back to nvidia-current didn't solve the problem, so I have been playing with this for a while. It seems glx support is missing, and my intel graphics is not responding. gnome loads only in fallback mode. Here are some outputs: glxinfo name of display: :0.0 Error: couldn't find RGB GLX visual or fbconfig glxgears Error: couldn't get an RGB, Double-buffered visual optirun glxgears works fine lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev ff) Here's the content of log file when just after running glxinfo: less /var/log/Xorg.0.log | grep gl [ 112.156] (II) LoadModule: "glx" [ 112.157] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 112.157] (II) Module glx: vendor="X.Org Foundation" [ 112.157] (II) UnloadModule: "glx" [ 112.157] (II) Unloading glx [ 112.157] (EE) Failed to load module "glx" (module requirement mismatch, 0) Some more info... lsmod Module Size Used by bbswitch 13612 0 pci_stub 12623 1 vboxpci 23195 0 vboxnetadp 25671 0 vboxnetflt 23480 0 vboxdrv 320372 3 vboxpci,vboxnetadp,vboxnetflt parport_pc 32689 0 ppdev 17074 0 bnep 18141 2 rfcomm 46620 12 binfmt_misc 17501 1 snd_hda_codec_hdmi 32049 1 snd_hda_codec_realtek 78147 1 joydev 17458 0 uvcvideo 76750 0 videobuf2_core 32852 1 uvcvideo videodev 120310 2 uvcvideo,videobuf2_core videobuf2_vmalloc 12861 1 uvcvideo videobuf2_memops 13405 1 videobuf2_vmalloc snd_hda_intel 33492 3 coretemp 13401 0 kvm_intel 132760 0 arc4 12530 2 snd_hda_codec 134213 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel kvm 414111 1 kvm_intel iwlwifi 386837 0 i915 520799 2 snd_hwdep 17699 1 snd_hda_codec snd_pcm 96668 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec psmouse 100389 0 drm_kms_helper 49113 1 i915 ghash_clmulni_intel 13221 0 snd_seq_midi 13325 0 snd_rawmidi 30513 1 snd_seq_midi btusb 22475 0 drm 288436 3 i915,drm_kms_helper serio_raw 13216 0 snd_seq_midi_event 14900 1 snd_seq_midi snd_seq 61555 2 snd_seq_midi,snd_seq_midi_event mac80211 535936 1 iwlwifi bluetooth 209249 22 bnep,rfcomm,btusb snd_timer 29426 2 snd_pcm,snd_seq snd_seq_device 14498 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78921 16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 206797 2 iwlwifi,mac80211 aesni_intel 51038 1 cryptd 20404 2 ghash_clmulni_intel,aesni_intel aes_x86_64 17256 1 aesni_intel dell_wmi 12682 0 sparse_keymap 13891 1 dell_wmi dcdbas 14439 0 i2c_algo_bit 13414 1 i915 microcode 22804 0 lpc_ich 17062 0 soundcore 15048 1 snd snd_page_alloc 18485 2 snd_hda_intel,snd_pcm video 19391 1 i915 mac_hid 13206 0 wmi 19071 1 dell_wmi mei 40691 0 lp 17760 0 parport 46346 3 parport_pc,ppdev,lp ahci 25721 3 libahci 31192 1 ahci atl1c 41102 0 how can I fix this? any ideas? Here is another thing I've tried: sudo apt-get purge nvidia* sudo reboot sudo apt-get install bumblebee bumblebee-nvidia didn't make any difference. The most relevant post I found on the web was this: http://ubuntuforums.org/archive/index.php/t-1722306.html Here it's explained that the problem is with the priority of shared libraries that are loaded for glxinfo. 
Here's what I get when I look up the libraries: linux-vdso.so.1 => (0x00007fff6bf8b000) libGL.so.1 => /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 (0x00007f22d6ccd000) libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f22d6993000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22d65d3000) libglapi.so.0 => /usr/lib/x86_64-linux-gnu/libglapi.so.0 (0x00007f22d63ad000) libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f22d619b000) libXdamage.so.1 => /usr/lib/x86_64-linux-gnu/libXdamage.so.1 (0x00007f22d5f97000) libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f22d5d91000) libX11-xcb.so.1 => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 (0x00007f22d5b8f000) libxcb-glx.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0 (0x00007f22d5977000) libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f22d5759000) libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f22d5553000) libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f22d5346000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f22d5129000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f22d4f25000) /lib64/ld-linux-x86-64.so.2 (0x00007f22d6f54000) libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f22d4d20000) libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f22d4b1a000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f22d4911000) It seems "mesa" instead of nvidia-current is being used. However I didn't find any obvious link to /lib/nvidia-current in /etc/ld.so.conf (where the directories containing the shared libraries are listed). I don't know if there's anything missing or if this is normal. thanks, UPDATE: The problem was solved by updating to Ubuntu 13.04. It seems bumblebee was to blame, but I'm not sure what was going wrong...

    Read the article

  • Sound Problem due to wrong driver installation

    - by Starx
    First of all, the audio used to play from the built-in speakers, but whenever I connected a headphone or external speaker, the sound didn't come through. So I installed a Realtek sound driver to fix this (actually I didn't find any relevant driver for my sound card, so I installed it hoping it would shed some light). When I rebooted the system, my sound was completely gone, and now Ubuntu does not detect any sound devices in my system. I clearly installed the wrong driver; now I need to remove it, get back to the previous state, and find the correct driver. My laptop is a Dell Studio 1747; it uses a Creative Labs Sound Blaster X-Fi MB, and the chipset is Intel PM55. How can I fix this?

    Read the article

  • Starting it back up again

    - by Mickey Gousset
    After a couple of years' hiatus from blogging at Geeks With Blogs, I'm back!  I'm still blogging about Visual Studio 2010 and TFS 2010 over at Team System Rocks (soon to undergo some major revisions), so expect to see some cross-postings from there. Here, though, I expect to focus on System Center technologies (mostly System Center Operations Manager and System Center Service Manager, with some of the others thrown in there too), as that is my day job now.  I'll also use this blog to start tracking my foray into Windows Phone 7 development.  I've decided to go the game programming route first.

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high-level overview. Now it is time for the details on what was done to the existing CMMI project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings and then to the Process Template Manager.  Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will refer to the path as <toolpath>; however, the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”. As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs.  If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse. Now, the work needed with the witadmin tool. That includes uploading the structures that differ between v4.2 and v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”.  So, this will need to be corrected.  Some posts will have you go through this after the errors pop up.  I would recommend doing this prior to executing the importwitd process.  witadmin listfields /collection:<path to collection> > c:\ListFields.txt Review the following fields: AreaID – review the Name property and, if it states “AreaID”, then you will need to rename the Name field to reflect “Area ID”. ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID would be the other fields to check. To correct the issue, execute the following: witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count" Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.  Once this is done, proceed with the commands below. witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml" witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml" witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml" Modifications to the Bug Definition: The first step is to export the existing definition. witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml" Make modifications to the recently exported MyBug.xml file.  
Details for the modification are here:  http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command: witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml" Repeat the process for the Scenario or Requirement Type Definition: witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml" Make modifications to the recently exported MyRequirement.xml file.  Details for the modification are here:  http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask Once the changes are done, proceed with the import command: witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml" Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Adding a custom document template to a document library

    - by ybbest
    After you create a SharePoint document library, you can start creating documents based on the default document template. If you would like to add your own custom template, you can easily achieve this by creating a SharePoint solution using Visual Studio. In this post, I'd like to show how to add a custom document template to a SharePoint document library. You can download the complete source code here. 1. Create an empty SharePoint solution, create a document library called "YbbestCustomDocLib" and add a Module with a Word document template called FAX.dotx. 2. Modify the Elements.xml file in the module (the original post shows the before and after XML in screenshots). 3. Finally, you need to create a feature receiver to configure the DocumentTemplateUrl property of the document library. You can download the complete source code here.
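    The feature receiver itself is only in the downloadable source, not in this excerpt; a minimal sketch of what it might look like is below. The library name matches the post, but the template path is an assumption about where the Module deploys FAX.dotx:

        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Utilities;

        // Hypothetical feature receiver: points the library at the custom template.
        public class CustomDocTemplateEventReceiver : SPFeatureReceiver
        {
            public override void FeatureActivated(SPFeatureReceiverProperties properties)
            {
                var web = properties.Feature.Parent as SPWeb;
                if (web == null) return;

                var library = web.Lists.TryGetList("YbbestCustomDocLib") as SPDocumentLibrary;
                if (library == null) return;

                // Assumed deployment path for FAX.dotx - adjust to your Module's Url.
                library.DocumentTemplateUrl = SPUrlUtility.CombineUrl(
                    web.ServerRelativeUrl, "YbbestCustomDocLib/Forms/FAX.dotx");
                library.Update();
            }
        }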

    Read the article

  • How do I generate terrain like that of Scorched Earth?

    - by alex
    Hi, I'm a web developer and I am keen to start writing my own games. For familiarity, I've chosen JavaScript and the canvas element for now. I want to generate some terrain like that in Scorched Earth. My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs. I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords. Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (since making Breakout in 2003 with Visual Basic, anyway)?
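    One common family of answers (offered here as an illustration, not something from the original question) is midpoint displacement: start with two endpoint heights, repeatedly insert midpoints nudged by a random offset that shrinks at each level, and the result has exactly the kind of "sane" peaks and troughs described above. A minimal sketch in JavaScript:

        // Midpoint displacement for a 1D heightmap - one common approach, shown as a sketch.
        function generateTerrain(width, roughness) {
            var heights = new Array(width);
            heights[0] = 100 + Math.random() * 200;          // left edge height
            heights[width - 1] = 100 + Math.random() * 200;  // right edge height

            function subdivide(left, right, displacement) {
                if (right - left <= 1) return;
                var mid = Math.floor((left + right) / 2);
                // Midpoint = average of the endpoints plus a shrinking random nudge
                heights[mid] = (heights[left] + heights[right]) / 2 +
                               (Math.random() * 2 - 1) * displacement;
                subdivide(left, mid, displacement * roughness);
                subdivide(mid, right, displacement * roughness);
            }

            subdivide(0, width - 1, 100);
            return heights; // one y value per x column, ready to plot on a canvas
        }

        // Example: var terrain = generateTerrain(800, 0.6);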

    Read the article

  • Turn off laptop display and have monitor

    - by Ryan B
    My laptop display has stopped working and is unfixable! Anyway, I got a full HD monitor and plugged that into the HDMI port of the laptop. I have changed the power settings so that I can close the lid and still have the monitor on, and this all works. However, because it is on "mirror displays", I cannot get the full 1080p resolution that the monitor supports, and when I turn off mirror displays and switch off the laptop screen, the monitor also switches off! I'm really stuck, as I can only get the monitor to display its full resolution if I have the laptop screen turned on, and I don't want this because (as the laptop display does not work) things get lost on that side of the screen. HELP. (Dell Studio 1537; monitor is a Samsung SA300 connected through HDMI; Ubuntu 11.10)

    Read the article
