Search Results

Search found 22968 results on 919 pages for 'stuck again'.


  • RegexClean Transformation

    Use the power of regular expressions to cleanse your data right there inside the Data Flow. This transformation includes a full user interface for simple configuration, as well as advanced features such as error output configuration. Two regular expressions are used: a match expression and a replace expression. The transformation is designed around named capture groups (match groups), and even supports multiple expressions. This allows rich and complex expressions to be built, all through an easily reusable transformation, where a bespoke Script Component was previously the only alternative.

    Some simple properties are available for each column selected:

    Behaviour - The two behaviour modes offer similar functionality, with one difference: Replace substitutes the matched tokens within the input string, while Emit overwrites the whole string with the assembled output.

    Cascade - Cascade allows you to define multiple expressions, each on a new line. The match expression will be processed into one operation per line, which are then processed in order at run-time. Multiple replace expressions can also be specified, again each on a new line. If there is no corresponding replace expression for a match expression line, then the last replace expression is used instead. It is common to have multiple match expressions but only a single replace expression.

    Match Expression - The expression used to define the named capture groups. This is where you analyse the data, and tag or name elements within it as found by the match expression.

    Replace Expression - The replace expression determines the final output. It references the named groups from the match expression and assembles them into the final output.

    If you want to use regular expressions to validate data, try the Regular Expression Transformation instead.

    Quick Start Guide

    Select a column. A new output column is created for each selected column; there is no option for in-place replacement of column values. One input column can be used to populate multiple output columns; just select the column again in the lower grid, using the Input Columns drop-down selector. Amend the output column name and size as required; they default to those of the input column selected. Amend the behaviour as required; the default is Replace. Amend the cascade option as required; the default is true. Finally, enter your match and replace regular expressions.

    Quick Sample #1

    Parse an email address and extract the user and domain portions, then format the result as a web address, passing the user portion as a URL parameter. This uses two match groups, user and host, which correspond to the text before and after the @ respectively. Behaviour is Emit, and cascade is false, since there is only a single match expression.

    Match Expression: ^(?<user>[^@]+)@(?<host>.+)$
    Replace Expression: http://www.${host}?user=${user}

    Results

    Sample Input: zheng0@adventure-works.com
    Sample Output: http://www.adventure-works.com?user=zheng0

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the RegexClean Transformation in the list.

    Downloads

    The RegexClean Transformation is available for both SQL Server 2005 and SQL Server 2008. Please choose the version matching your SQL Server version, or install both versions and use them side by side if you have both SQL Server 2005 and SQL Server 2008 installed.
    RegexClean Transformation for SQL Server 2005
    RegexClean Transformation for SQL Server 2008

    Version History
    SQL Server 2005 Version 1.0.0.105 - Public Release (28 Jan 2008)
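
    The same match/replace behaviour can be tried outside SSIS with nothing but the .NET Regex class. A minimal sketch (the sample input address is reconstructed from the sample output above):

    using System;
    using System.Text.RegularExpressions;

    class RegexCleanSample
    {
        static void Main()
        {
            // Match expression: tag the text before and after the @ as named groups.
            var match = new Regex(@"^(?<user>[^@]+)@(?<host>.+)$");

            // Replace expression: .NET uses ${name} to reference a named group,
            // the same syntax the component uses.
            string output = match.Replace("zheng0@adventure-works.com",
                                          "http://www.${host}?user=${user}");

            Console.WriteLine(output); // http://www.adventure-works.com?user=zheng0
        }
    }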

    Read the article

  • Ubuntu 13.04 running really slow and Hanging

    - by CAM
    Up until recently I had been running 13.04 on my laptop very happily. This morning, however, I turned on my laptop to find it running really slowly. It takes 5 minutes to load a program, and even then the program freezes; I have already had 3 system hangs this morning. The Unity desktop appears to run OK, but programs do not. Things I have tried so far: checking for proprietary graphics drivers - none shown available (I already have Bumblebee running); using the recovery boot options from GRUB to repair broken packages. Recent changes: updated the computer, installed some indicator applets which have worked fine for me before. System specs: Asus U36s, Intel Core i5-2450M 2.5GHz, 4GB RAM, Nvidia GeForce 610M 1GB, dual boot Win7 & Ubuntu 13.04. I'm a bit of a noob with Ubuntu but am happy enough running stuff in the terminal if you advise me on what to run. I'm just a bit stuck on what to do to fix this without a reinstall. Thanks a lot for your help.

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
    Many companies and organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level, determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend an equal amount of effort as you did with the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. And then the data quality deteriorates over time again, and the cleansing process starts over and over again. The idea of Data Quality Services (DQS) is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don't throw away this knowledge. You store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically.

    The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application, together with the defined policies, people, and master data management processes, allows you to maintain this level permanently. This idea is shown in the following picture.

    OK, now you know what DQS and MDS are about, and you can imagine the importance of maintaining data quality. Here are some resources that help you prepare for and execute data quality (DQ) and master data management (MDM) activities.

    Books
    Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 - a general introduction to MDM, MDS, and data profiling, with matching explained in depth.
    Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 - I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS.
    Thomas Redman: Data Quality: The Field Guide - you should start with this book. Thomas Redman is the father of DQ and MDM.
    Tyler Graham: Microsoft SQL Server 2012 Master Data Services - MDS in depth from a product team member.
    Arkady Maydanchik: Data Quality Assessment - data profiling in depth.
    Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning - advanced data profiling with data mining.

    Forthcoming presentations
    I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013 on Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 - a good kick start to your first DQ and/or MDM project.

    Courses
    Data Quality and Master Data Management with SQL Server 2012 - I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

    Start improving the quality of your data now!

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.

    A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process, they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that, the organisation will use a build server (e.g. TFS, TeamCity, Jenkins), and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself, however a lot of folks don't like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn't even need a GUI. The phrase "headless build server" is often used to describe a build server that doesn't contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.

    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: "This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tools hosted inside the Visual Studio IDE." http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing, and it prompted a response from Kevin Cunnane of the SSDT product team: "The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine."

    I, like many others, would rather not have to install full blown Visual Studio, and so I asked whether there is any chance they'll ever support any of these scenarios:
    1. Installation of all build/deploy pre-requisites without installing the VS shell?
    2. TFS shipping with all of the pre-requisites for doing SSDT project builds/deploys?
    3. 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project builds/deploys?
    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that?

    Kevin replied again: "The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance, which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that the DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers, we've no problem with that."

    Again, here's the link to the thread, and it's worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead.

    @Jamiet

    Read the article

  • Allen for Umbraco with location EXIF meta data

    - by Vizioz Limited
    The latest version of Allen for Umbraco has now hit the Apple App Store. We have managed to add some nice improvements to this version, including:
    - Storing location and direction information when photos are taken within the App
    - Embedding EXIF data into the images when they are uploaded
    - Background uploading
    - Pull to refresh on the media tree

    Location and Direction
    By default, when the camera is used within an application, the location and direction the camera is pointing are not stored within the image meta data. We have now added full support so that this data is added. We have also added a setting which allows you to prevent this data from being uploaded to your website: if you do not want the location data to be sent, you can turn it off within Allen. Note: please don't forget that location services do need to be turned on to allow the app to access the images in the phone's asset library. We have already had quite a few ideas from users for using this location data, from logging free parking in Denmark to geo-tagging holiday photos and linking the photos to Google Street View.

    Embedding EXIF data
    We now embed all the meta data available on the iPhone into the image when it is uploaded to your server; this allows you to pull the data out and use it within your site. Have a look at Cultiv's Photo Meta Data package for great example code that allows you to automatically pull this data out and populate properties on your Umbraco media item. We slightly modified the source code of this package so that it always extracts the image data, as the default package requires a property to allow the data to be extracted. It's an easy change; if you get stuck, add a comment to this post.

    Background Uploading
    If you need to start doing something else on your phone while uploading multiple images, you can now press the home button and the application will continue to upload your images in the background. As soon as it has finished, you will receive a standard Apple notification.

    Pull to Refresh
    Our final enhancement has been to add "pull to refresh" to the media trees: just pull the tree downwards with your finger and it will refresh. This is useful if you are adding items to your media tree while testing your site with Allen for Umbraco.

    Future enhancements... your ideas?
    If you have any ideas for future enhancements, feel free to add a comment below!
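
    As a rough illustration of what consuming the embedded EXIF data described above can look like on the server side, here is a minimal C# sketch that reads the GPS latitude from an uploaded JPEG using System.Drawing. The tag ids (0x0001, 0x0002) are the standard EXIF GPS tags; this is only a generic sketch, not code from the Cultiv package:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    class ExifGpsSketch
    {
        // EXIF GPS tags: 0x0001 = GPSLatitudeRef ("N"/"S"), 0x0002 = GPSLatitude
        // (three rationals: degrees, minutes, seconds, each numerator/denominator).
        static double? ReadLatitude(string path)
        {
            using (var image = Image.FromFile(path))
            {
                try
                {
                    PropertyItem lat = image.GetPropertyItem(0x0002);
                    PropertyItem latRef = image.GetPropertyItem(0x0001);

                    double degrees = ToDouble(lat.Value, 0);
                    double minutes = ToDouble(lat.Value, 8);
                    double seconds = ToDouble(lat.Value, 16);

                    double value = degrees + minutes / 60.0 + seconds / 3600.0;
                    return latRef.Value[0] == (byte)'S' ? -value : value;
                }
                catch (ArgumentException)
                {
                    return null; // no GPS data embedded in this image
                }
            }
        }

        static double ToDouble(byte[] raw, int offset)
        {
            uint numerator = BitConverter.ToUInt32(raw, offset);
            uint denominator = BitConverter.ToUInt32(raw, offset + 4);
            return denominator == 0 ? 0 : (double)numerator / denominator;
        }
    }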

    Read the article

  • How do you measure the value of your software?

    - by Mike
    Hi, one of the principles of agile is that you should measure working software: "Working software is the primary measure of progress" - 12 Principles of Agile. The thing is, while I can measure my software in terms of stories done, bugs squashed, or the volume of defect reports decreasing, I'm stuck on how to measure the value of my software. If I use Mike Cohn as an example, and his helping SalesForce.com deliver 500% more value to its customers compared to the previous year*, how do I measure that increase? How do I measure where I am right now? Other metrics he uses are the number of features and the number of features per developer. This is something I could work out if my backlog was in good order and the stories were cut up by 'feature', but we're just starting out with Agile, so I need some way of working out what value we deliver now, then use a similar metric in, say, six months to see if we've increased our output. I've heard about measuring the value of software by an uptick in revenue, or an increase in customer satisfaction (how would you measure that, though?), but those increases could be attributed to anything in the company (sales, accounting, support) and not directly to the work my department is doing. So, how do you measure the value of your software, and how did you start? Thanks, Mike

    *Succeeding With Agile - Mike Cohn

    Read the article

  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called "Can Google break the computer language barrier?" In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models."

    The Translation Guy has a further discussion on this, called "Google Translate is Finished." He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks."

    Surprisingly, all this analysis hasn't raised that much comment from the fans of machine translation, or from its detractors either, for that matter. Perhaps it's the season of goodwill? What is clear to me, however, is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they already have, models that will also scale effectively where others didn't. So too will they harness human input, which really is the way to train MT in the quality direction.

    But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge humanly-translated corpus drove the gains for Google Translate, and that what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely.

    In the enterprise applications space, where we have some control over content, this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user-generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are there in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much data from a table is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and used in a query), is the data of the single table stored multiple times in the memory cache, or only a single time?

    Here is a query you can run to figure out what kind of data is stored in the cache.

    USE AdventureWorks
    GO
    SELECT COUNT(*) AS cached_pages_count,
        name AS BaseTableName, IndexName, IndexTypeDesc
    FROM sys.dm_os_buffer_descriptors AS bd
    INNER JOIN (
        SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID,
            i.name IndexName, i.type_desc IndexTypeDesc
        FROM (
            SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
            FROM sys.allocation_units AS au
            INNER JOIN sys.partitions AS p
                ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
            UNION ALL
            SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
            FROM sys.allocation_units AS au
            INNER JOIN sys.partitions AS p
                ON au.container_id = p.partition_id AND au.type = 2
        ) AS s_obj
        LEFT JOIN sys.indexes i
            ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
    ) AS obj
        ON bd.allocation_unit_id = obj.allocation_unit_id
    WHERE database_id = DB_ID()
    GROUP BY name, index_id, IndexName, IndexTypeDesc
    ORDER BY cached_pages_count DESC;
    GO

    Now let us run the query above and observe its output. We can see that there are four columns in the result. Cached_Pages_Count lists the pages cached in memory. BaseTableName lists the original base table from which data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index.

    Now, let us do one more experiment here. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database.

    DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers, so we will be able to start again from there. Now run the following script and check the execution plan of the query.

    USE AdventureWorks
    GO
    SELECT UnitPrice, ModifiedDate
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderDetailID BETWEEN 1 AND 100
    GO

    The execution plan shows the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It gives us the following output. It is clear from the resultset that when more than one index is used, the data pages related to both (or all) of the indexes are stored in the memory cache separately.

    Let me know what you think of this article. I had great pleasure writing it because I was able to write on the subject I like most. In the next article, we will see exactly what data are cached and what are not, using a few undocumented commands.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV

    Read the article

  • Cannot get realtek8188ee to work in 14.04

    - by dang42
    I have a Toshiba Satellite C55-A5300 laptop. When I run lspci -nn it shows:

    02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188EE Wireless Network Adapter [10ec:8179] (rev 01)

    It has always had the common problem others have asked about here (and in many, many other places on the web) where it would connect, then drop the connection at random intervals. I tried every solution I could find, here and elsewhere, and they always caused errors after running "make" (more details below), but as I could still connect to networks I just dealt with it. I upgraded to 14.04 a few days ago and now it won't connect at all - I need help getting this to work. I originally followed the instructions posted by chili555 found here: Wireless not working on Toshiba Satellite C55-A5281, but I get the following errors when running "make":

    /home/dan/backports-3.11-rc3-1/net/wireless/sysfs.c:151:2: error: unknown field 'dev_attrs' specified in initializer
      .dev_attrs = ieee80211_dev_attrs,
      ^
    /home/dan/backports-3.11-rc3-1/net/wireless/sysfs.c:151:2: warning: initialization from incompatible pointer type [enabled by default]
    /home/dan/backports-3.11-rc3-1/net/wireless/sysfs.c:151:2: warning: (near initialization for 'ieee80211_class.suspend') [enabled by default]
    make[6]: *** [/home/dan/backports-3.11-rc3-1/net/wireless/sysfs.o] Error 1
    make[5]: *** [/home/dan/backports-3.11-rc3-1/net/wireless] Error 2
    make[4]: *** [module/home/dan/backports-3.11-rc3-1] Error 2
    make[3]: *** [modules] Error 2
    make[2]: *** [modules] Error 2
    make[1]: *** [modules] Error 2
    make: *** [default] Error 2

    I have no clue how to diagnose the problem or how to proceed from here. I also don't know what information one might need from me in order to move forward. I'll be happy to share anything you'd like to know if it results in this thing (finally!) working properly. Thanks in advance for any / all help.

    ETA: I did see this post - Realtek 8188ee wireless driver SOLVED - and it looks like it is discussing the same problem I'm having, but I cannot for the life of me figure out what "I had to add the testing repository to my /etc/apt/sources.list" means, so I am still stuck.

    Read the article

  • Unity - Mecanim & Rigidbody on Third Person Controller - Gravity bug?

    - by Celtc
    I'm working on a third person controller which uses PhysX to interact with other objects (using the Rigidbody component) and Mecanim to animate the character. All the animations are baked on Y, and movement on this axis is controlled by the gravity applied by the rigidbody component. The configuration of the falling animation and of the character components are shown in the images. Since the falling animation doesn't have root motion on XZ, I move the character on XZ in code, like this:

    // On the Ground
    if (IsGrounded())
    {
        GroundedMovementMgm();

        // Stores the velocity
        velocityPreFalling = rigidbody.velocity;
    }
    // Mid-Air
    else
    {
        // Continue the pre-falling velocity
        rigidbody.velocity = new Vector3(velocityPreFalling.x, rigidbody.velocity.y, velocityPreFalling.z);
    }

    The problem is that when the character starts falling and hits a wall in mid-air, it gets stuck to the wall. There are some pics which explain the problem. Hope someone can help me. Thanks, and sorry for my bad English!

    PD: I was asked for the IsGrounded() function, so I'm adding it:

    void OnCollisionEnter(Collision collision)
    {
        if (!grounded)
            TrackGrounded(collision);
    }

    void OnCollisionStay(Collision collision)
    {
        TrackGrounded(collision);
    }

    void OnCollisionExit()
    {
        grounded = false;
    }

    public bool IsGrounded()
    {
        return grounded;
    }

    private void TrackGrounded(Collision collision)
    {
        var maxHeight = capCollider.bounds.min.y + capCollider.radius * .9f;

        foreach (var contact in collision.contacts)
        {
            if (contact.point.y < maxHeight && Vector3.Angle(contact.normal, Vector3.up) < maxSlopeAngle)
            {
                grounded = true;
                break;
            }
        }
    }

    I'll also add a LINK to download the project if someone wants it.
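
    (Not an answer from the original thread, but one commonly suggested mitigation, sketched under the assumption that the stored XZ velocity is what pins the character to the wall: stop re-applying it once the falling character touches something.)

    void OnCollisionStay(Collision collision)
    {
        TrackGrounded(collision);

        // If we touch geometry while airborne, drop the preserved velocity so
        // the mid-air branch stops pushing the character into the wall.
        if (!IsGrounded())
            velocityPreFalling = Vector3.zero;
    }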

    Read the article

  • light.exe : error LGHT0217: Error executing ICE action 'ICE*' with BizTalk Deployment Framework & TFS 2010 Build.

    - by Vishal
    Hi there. Recently I was working with BizTalk Deployment Framework v5.0 for my BizTalk Server 2009 projects and TFS 2010 builds. I had followed all the steps mentioned in the BTDF documentation to create the build definition, and also the steps to set up the build server with BTDF. After a few hiccups, I was stuck at this light.exe validation error. The detailed error is as below:

    light.exe : error LGHT0217: Error executing ICE action 'ICE06'. The most common cause of this kind of ICE failure is an incorrectly registered scripting engine. See http://wix.sourceforge.net/faq.html#Error217 for details and how to solve this problem. The following string format was not expected by the external UI message logger: "The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance."

    I found a few blog posts and forum answers out there which had steps to resolve the error, but all of them mentioned different things, like:
    - Disable the validation itself so that the VBScript would not be called. (I didn't want to disable the validation.)
    - Put NT AUTHORITY\NETWORK SERVICE in the local Administrators group. (Not recommended, as it opens up network security holes.)

    What actually worked for me was this: the TFS build service account was part of the Administrators group on the build server, which is OK, but somehow it was also part of the IIS_IUSR group. I removed the TFS build service account from the IIS_IUSR group, queued up my TFS build, but got the same error. After some more digging, I found that I had not restarted the Visual Studio Team Build Service.

    In a nutshell:
    1. Remove the TFS build service account from the IIS_IUSR group.
    2. Restart the Visual Studio Team Build Service, either from Services or the TFS console.

    Hope this resolves the issue for someone and saves them a bunch of hours.

    Thanks, Vishal Mody

    Read the article

  • IE9 RC fixed the “Internet Explorer cannot display the webpage” error when running an ASP.NET application in Visual Studio

    - by Jon Galloway
    One of the obstacles ASP.NET developers faced in using the Internet Explorer 9 Beta was the dreaded "Internet Explorer cannot display the webpage" error when running an ASP.NET application in Visual Studio. In the bug information on Connect (issue 601047), Eric Lawrence said that the problem was "caused by failure to failover from IPv6 to IPv4 when the connection is local." Robert MacLean gives some more information as to what was going wrong: "The problem is Windows, especially since it assumes IPv6 is better than IPv4. Note [...] that when you ping localhost you get an IPv6 address. So what appears to be happening is when IE9 tries to go to localhost it uses IPv6, and the ASP.NET Development Server is IPv4 only and so nothing loads and we get the error."

    The Simple Fix - Install IE9 RC
    Internet Explorer 9 RC fixes this bug, so if you had tried IE9 Beta and stopped using it due to problems with ASP.NET development, install the RC.

    The Workaround in IE9 Beta
    If you're stuck on IE9 Beta for some reason, you can follow Robert's workaround, which involves a one character edit to your hosts file. I've been using it for months, and it works great.

    1. Open Notepad (running as administrator) and edit the hosts file (found in %systemroot%\system32\drivers\etc).
    2. Remove the # comment character before the line starting with 127.0.0.1.
    3. Save the file - if you have problems saving, it's probably because you weren't running as administrator.

    When you're done, your hosts file will end with the following lines (assuming you were using a default hosts file setup beforehand):

    # localhost name resolution is handled within DNS itself.
    127.0.0.1       localhost
    #    ::1             localhost

    Note: more information on editing your hosts file here. This causes Windows to default to IPv4 when resolving localhost, which will point to 127.0.0.1, which is right where Cassini - I mean the ASP.NET Web Development Server - is waiting for it.
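
    A quick way to see the resolution order for yourself (a little sketch of mine, not from the original post):

    using System;
    using System.Net;

    class LocalhostCheck
    {
        static void Main()
        {
            // Shows the order in which "localhost" resolves on this machine.
            // If ::1 (IPv6) is listed first, a client that prefers IPv6 will
            // try it before 127.0.0.1 -- which the IPv4-only development
            // server never answers.
            foreach (IPAddress address in Dns.GetHostAddresses("localhost"))
            {
                Console.WriteLine(address);
            }
        }
    }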

    Read the article

  • Backup and Transfer Foobar2000 to a New Computer

    - by Mysticgeek
    If you are a fan of Foobar2000, you have undoubtedly tweaked it to the point where you don't want to set it all up again on a new machine. Here we look at how to transfer Foobar2000 settings to a new Windows 7 machine.

    Note: for this article we are transferring Foobar2000 settings from one Windows 7 machine to another over a network running Windows Home Server.

    Foobar2000
    Foobar2000 is an awesome music player which is highly customizable, and which we've previously covered. Here we take a look at how it's set up on the current machine. It's nothing flashy, but it is set up for our needs and includes a lot of components and playlists.

    Backup Files
    Rather than wasting time setting everything up again on a new machine, we can back up the important files and replace them on the new machine. First type or copy the following into the Explorer address bar:

    %appdata%\foobar2000

    Now copy all of the files in the folder and store them on a network drive or on some type of removable media or device.

    New Machine
    Now you can install the latest version of Foobar2000 on your new machine. You can go with a Standard install, as we will be replacing our backed up configuration files anyway. When it launches, it will be set with all the defaults... and we want what we had back. Browse to the following on the new machine:

    %appdata%\foobar2000

    Delete all of the files in this directory, then replace them with the ones we backed up from the other machine. You'll also want to navigate to C:\Program Files\Foobar2000 and replace the existing Components folder with the backed up one. When you get the screen telling you there are already files of the same name, select Move and Replace, and check the box "Do this for the next 6 conflicts".

    Now we're back in business! Everything is exactly as it was on the old machine. In this example, we were moving the Foobar2000 files between computers on the same home network. All the music is coming from a directory on our Windows Home Server, so the paths hadn't changed. If you're moving these files to a computer on another network... say your work computer, you'll need to adjust where the music folders point to.

    Windows XP
    If you're setting up Foobar2000 on an XP machine, you can enter the following into the Run line:

    %appdata%\foobar2000

    Then copy your backed up files into the Foobar2000 folder, and remember to swap out the Components folder in C:\Program Files\Foobar2000. Confirm the replacement of files and folders by clicking Yes to All.

    Conclusion
    This method worked perfectly for us on our home network setup. There might be some other things that need a bit of tweaking, but overall the process is quick and easy. There are a lot of cool things you can do with Foobar2000, like ripping an audio CD to FLAC. If you're a fan of Foobar2000, or considering switching to it, we will be covering more awesome features in future articles.
    Download Foobar2000 – Windows Only
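
    If you migrate machines often, the copy step is easy to script. A small C# sketch (the destination path is hypothetical; point it at your own share or drive, and note it only covers the %appdata% folder, so the Components folder under Program Files still needs the same treatment):

    using System;
    using System.IO;

    class FoobarBackupSketch
    {
        static void Main()
        {
            // %appdata%\foobar2000 resolves to the per-user roaming profile.
            string source = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "foobar2000");

            // Hypothetical destination -- point this at your network share.
            string destination = @"\\server\backup\foobar2000";

            // Recreate the folder tree, then copy every file, overwriting.
            foreach (string dir in Directory.GetDirectories(source, "*", SearchOption.AllDirectories))
                Directory.CreateDirectory(dir.Replace(source, destination));

            foreach (string file in Directory.GetFiles(source, "*", SearchOption.AllDirectories))
                File.Copy(file, file.Replace(source, destination), true);
        }
    }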

    Read the article

  • How do NTP Servers Manage to Stay so Accurate?

    - by Akemi Iwaya
    Many of us have had the occasional problem with our computers and other devices retaining accurate time settings, but a quick sync with an NTP server makes all well again. But if our own devices can lose accuracy, how do NTP servers manage to stay so accurate? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Photo courtesy of LEOL30 (Flickr). The Question SuperUser reader Frank Thornton wants to know how NTP servers are able to remain so accurate: I have noticed that on my servers and other machines, the clocks always drift so that they have to sync up to remain accurate. How do the NTP server clocks keep from drifting and always remain so accurate? How do the NTP servers manage to remain so accurate? The Answer SuperUser contributor Michael Kjorling has the answer for us: NTP servers rely on highly accurate clocks for precision timekeeping. A common time source for central NTP servers are atomic clocks, or GPS receivers (remember that GPS satellites have atomic clocks onboard). These clocks are defined as accurate since they provide a highly exact time reference. There is nothing magical about GPS or atomic clocks that make them tell you exactly what time it is. Because of how atomic clocks work, they are simply very good at, having once been told what time it is, keeping accurate time (since the second is defined in terms of atomic effects). In fact, it is worth noting that GPS time is distinct from the UTC that we are more used to seeing. These atomic clocks are in turn synchronized against International Atomic Time or TAI in order to not only accurately tell the passage of time, but also the time. Once you have an exact time on one system connected to a network like the Internet, it is a matter of protocol engineering enabling transfer of precise times between hosts over an unreliable network. In this regard a Stratum 2 (or farther from the actual time source) NTP server is no different from your desktop system syncing against a set of NTP servers. By the time you have a few accurate times (as obtained from NTP servers or elsewhere) and know the rate of advancement of your local clock (which is easy to determine), you can calculate your local clock’s drift rate relative to the “believed accurate” passage of time. Once locked in, this value can then be used to continuously adjust the local clock to make it report values very close to the accurate passage of time, even if the local real-time clock itself is highly inaccurate. As long as your local clock is not highly erratic, this should allow keeping accurate time for some time even if your upstream time source becomes unavailable for any reason. Some NTP client implementations (probably most ntpd daemon or system service implementations) do this, and others (like ntpd’s companion ntpdate which simply sets the clock once) do not. This is commonly referred to as a drift file because it persistently stores a measure of clock drift, but strictly speaking it does not have to be stored as a specific file on disk. In NTP, Stratum 0 is by definition an accurate time source. Stratum 1 is a system that uses a Stratum 0 time source as its time source (and is thus slightly less accurate than the Stratum 0 time source). Stratum 2 again is slightly less accurate than Stratum 1 because it is syncing its time against the Stratum 1 source and so on. 
In practice, this loss of accuracy is so small that it is completely negligible in all but the most extreme of cases. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
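
    To make the drift arithmetic the answer describes concrete, here is a small C# sketch (with made-up sample readings, not from the original answer) of computing a drift rate from two reference observations and using it to correct the local clock:

    using System;

    class ClockDriftSketch
    {
        static void Main()
        {
            // Two observations of (reference time, local clock reading).
            // In practice these come from NTP exchanges; the values are made up.
            DateTime ref1   = new DateTime(2014, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            DateTime local1 = ref1;
            DateTime ref2   = ref1.AddHours(24);
            DateTime local2 = local1.AddHours(24).AddSeconds(2); // ran 2s fast

            // Drift rate: extra local seconds per elapsed reference second.
            double drift = (local2 - local1).TotalSeconds /
                           (ref2 - ref1).TotalSeconds - 1.0;

            // Once locked in, the rate corrects local readings even if the
            // upstream time source becomes unavailable for a while.
            double correctionAfter12h = -drift * 12 * 3600;
            Console.WriteLine("drift rate: {0:E2} s/s", drift);
            Console.WriteLine("correction after 12h: {0:F3} s", correctionAfter12h);
        }
    }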

    Read the article

  • You Might Be a SharePoint Professional If…

    - by Mark Rackley
    I really think no explanation is needed. Hope this makes you smile. Thanks again for being an awesome SharePoint community!

    If you can only dream about working an 8 hour day, there's a good chance you are a SharePoint professional.

    You might be a SharePoint professional if the last time you heard "Old MacDonald Had a Farm" you wondered "How many web front ends does it have?"

    If you consider Twitter the best form of support since the dawn of the Internet, you might be a SharePoint professional.

    If you are giddy-as-a-school-girl excited about going to Anaheim in October and it has NOTHING to do with Disneyland, you might be a SharePoint professional.

    You might be a SharePoint professional if you own more SharePoint shirts than you do pairs of underwear.

    If you've thought of giving up a career in the IT world for a job taking orders at a fast food chain, you might be a SharePoint professional.

    You might be a SharePoint professional if the only people who understand the words that come out of your mouth are other SharePoint people.

    If you put the word "Share" or "SP" in front of EVERYTHING (ShareFood, SPRunner, etc... etc...), then you might be a SharePoint professional.

    You are probably a SharePoint professional if you love SharePoint... you hate SharePoint... you love SharePoint... you hate SharePoint...

    If the only thing you'd rather do more than SharePoint is SharePint, then you are definitely a SharePoint professional.

    You might be a SharePoint professional if your idea of name dropping is "Andrew Connell says..." or "According to Todd Klindt..." or even "Well, when I was stuck in a Turkish prison with Joel Oleson..."

    Read the article

  • Understanding Collabnet's LDAP binding

    - by Robert May
    We want to use both Subversion usernames and passwords as well as Active Directory for authentication on our Collabnet Subversion server. This has proven to be more of a challenge than we thought, mostly because Collabnet's documentation is pretty poor. To supplement that documentation, I add my own.

    The first thing to understand is that the attribute you specify in the LDAP Login Attribute ONLY applies to lookups done for the user. It does NOT apply to the LDAP Bind DN field. Second, know that the debug logs (error is the one you want) don't give you debug information for the bind DN, just the login attempts. Third, by default, Active Directory does not allow anonymous binds, so you MUST put in a user that has the authority to query the Active Directory LDAP.

    Because of these items, the values to set in those fields can be somewhat confusing. You'll want to have ADSI Edit handy (I also used ldp, which is installed by default on Server 2008), since ADSI Edit can help you find stuff in your Active Directory. Be careful, you can also break stuff. Here's what should go into those fields.

    LDAP Security Level: Should be set to None.

    LDAP Server Host: Should be set to the full name of a domain controller in your domain. For example, dc.mydomain.com.

    LDAP Server Port: Should be set to 3268. The default port of 389 will only query that specific server, not the global catalog. By setting it to 3268, the global catalog will be queried, which is probably what you want.

    LDAP Base DN: Should be set to the location where you want the search for users to begin. By default, the search scope is set to sub, so all child organizational units below this setting will be searched. In my case, I had created an OU specifically for users for group policies. My value ended up being: OU=MyOu,DC=domain,DC=org. However, if you're pointing it to the default Users folder, you may end up with something like CN=Users,DC=domain,DC=org (or com or whatever). Again, use ADSI Edit and use the Distinguished Name that it shows.

    LDAP Bind DN: This needs to be the Distinguished Name of the user that you're going to use for binding (i.e. the user you'll be impersonating) for doing queries. In my case, it ended up being CN=svn svn,OU=MyOu,DC=domain,DC=org. Why the double svn, you might ask? That's because the first and last name fields are both set to svn, and by default the distinguished name is built from the first and last name fields! That's important. It's NOT the username or account name! Again, use ADSI Edit, browse to the user you want to use, right-click and select Properties, and then search the attributes for the Distinguished Name. Once you've found it, select it, click View, and you can copy and paste it into this field.

    LDAP Bind Password: This is the password for the account in the Bind DN.

    LDAP Login Attribute: sAMAccountName. If you leave this blank, uid is used, which may not even be set. This tells it to use the Account Name field that's defined under the Account tab for users in Active Directory Users and Computers. Note that this attribute DOES NOT APPLY to the LDAP Bind DN; there you must use the full distinguished name. This attribute allows users to type their username and password for authentication, rather than typing their distinguished name, which they probably don't know.

    LDAP Search Scope: Probably should stay at sub, but could be different depending on your situation.

    LDAP Filter: I left mine blank, but you could provide one to limit what you want to see. LDP would be helpful for determining what this is.

    LDAP Server Certificate Verification: I left it checked, but didn't try it without it being checked.

    Hopefully, this will save some others pain when trying to get Collabnet set up.

    Technorati Tags: Subversion, collabnet
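
    To make the bind-DN-versus-login-attribute distinction concrete, here is a small C# sketch (System.DirectoryServices; all names are the same placeholders used above) of what the server effectively does: bind with the full distinguished name, then look the user up by sAMAccountName:

    using System;
    using System.DirectoryServices;

    class LdapBindSketch
    {
        static void Main()
        {
            // The bind DN must be the full distinguished name (built from the
            // first and last name fields by default in AD), NOT the account
            // name -- the login-attribute setting only applies to the users
            // being looked up.
            using (var entry = new DirectoryEntry(
                "LDAP://dc.mydomain.com:3268/OU=MyOu,DC=domain,DC=org",
                "CN=svn svn,OU=MyOu,DC=domain,DC=org",
                "bind-password"))
            using (var searcher = new DirectorySearcher(entry))
            {
                // What users type is matched against sAMAccountName.
                searcher.Filter = "(sAMAccountName=jdoe)";
                SearchResult result = searcher.FindOne();
                Console.WriteLine(result == null ? "not found" : result.Path);
            }
        }
    }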

    Read the article

  • Need a MP3 ID3 tagger, and cover fetcher

    - by Kaustubh P
    I need to tag my MP3 library, and have tried kid3 (which was manual tagging) when I used Kubuntu 9.10 (I now use Ubuntu Maverick Meerkat). Here are the features I am hoping for:

    - A good and clean UI.
    - Tagging should be automatic, like Winamp's autotag feature, which rocks, btw!
    - It should also embed the cover art in the MP3, not copy a JPEG file into the folder, because nowadays all players support displaying cover art. But acceptable if not possible.
    - Rename the files as per some regular expression like %TrackNo - %Artist - %Title.
    - It should be accurate and, more importantly, smart. I want to start tagging at night, and hopefully my collection should be done by the morning, without it being stuck at a user prompt at 1%.

    If one app can't do all of this, I am willing to use 3; wouldn't mind exposure to a few more apps ;) I have used Picard or something like it, and I didn't like it a lot. But I am willing to use it if there is no other alternative. Thanks for your time!

    Read the article

  • Add game mechanics through equipment?

    - by Sidar
    In a game with different weapons and armor that actually affect more than just player stats, how would you achieve such an effect? (These are just examples, not concrete ideas.) For example, we could have a handgun, an uzi, and then the graviton-gun. The first two would just shoot bullets; the third one does more than shoot a simple projectile. It could allow the player to hold an enemy and drag it around as a meat shield. The player could also wear generic armor, but at some point wear armor that can absorb projectiles; after absorbing enough projectiles you can shoot a giant blast. All these weapons/armor have different "behaviors" that either just raise stats or actually add new mechanics. In a simple case, most guns would have similar properties, and changing a few settings would create a new weapon (a handgun shoots at an interval of x seconds; lower this number and you have a machine gun). This obviously does not work if you intend to do more than just shoot projectiles. I'm pretty much stuck on writing the interface structure. While weapons and armor have different purposes, both should be able to process certain effects that change or add mechanics in the game world.
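
    One possible shape for that interface structure (a sketch, not a prescription): make every piece of equipment a bag of behaviour components, so "raises stats" and "adds a mechanic" are both just behaviours the game world invokes.

    using System.Collections.Generic;

    interface IEquipmentBehavior
    {
        // Called when the owner uses the equipment (fires a gun, triggers armor).
        void OnUse(Player owner);

        // Called by world events the behaviour cares about (e.g. being hit).
        void OnProjectileHit(Player owner, Projectile projectile);
    }

    class ProjectileShot : IEquipmentBehavior
    {
        public float Interval;                    // handgun vs. uzi is just data
        public void OnUse(Player owner) { /* spawn a projectile every Interval */ }
        public void OnProjectileHit(Player owner, Projectile p) { }
    }

    class AbsorbProjectiles : IEquipmentBehavior
    {
        int absorbed;
        public void OnUse(Player owner) { /* release blast once absorbed > N */ }
        public void OnProjectileHit(Player owner, Projectile p) { absorbed++; }
    }

    class Equipment
    {
        public List<IEquipmentBehavior> Behaviors = new List<IEquipmentBehavior>();
    }

    // Placeholder types assumed by the sketch.
    class Player { }
    class Projectile { }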

    Read the article

  • Social Engineering approach to collecting from deadbeat ebay winners

    - by Malcolm Anderson
    You just sold something on e-bay and now the winner won't pay up.  What do you do?  I'm not sure what the legality of this kind of Social Engineering hack is, but I believe you've got to give it points for elegance.   Here's the link to the lifehacker.com post (I can't find the original Reddit post.) Reddit user "BadgerMatt" (we'll call him Matt for short) recently posted a story about how he tried to sell tickets to a sporting event on eBay, but when the auction was won the winning bidder backed out of the deal. In some cases this is mainly an inconvenience and you can re-list the item, but Matt was selling tickets to a sporting event and no longer had the time to do that. With the losing bidders uninterested in the tickets, he was going to end up stuck with tickets he couldn't use and a deadbeat bidder who was unwilling to honor their contract. Rather than give up, Matt decided to trick her into paying: I created a new eBay account, "Payback" we'll call it, and sent her a message: "Hi there, I noticed you won an auction for 4 [sporting event] tickets. I meant to bid on these but couldn't get to a computer. I wanted to take my son and dad and would be willing to give you $1,000 for the tickets. I imagine that you've already made plans to attend, but I figured it was worth a shot." The woman agreed, but for $1,100. She paid for the auction, received the tickets, and then Matt (of course) never re-purchased them. Needless to say, the woman was angry. Perhaps it was the wrong thing for the right reasons, but I'm mostly jealous I never thought of it back when I still sold things on eBay.

    Read the article

  • Jack Audio ubuntu 12.10

    - by Shaneo1
    I used to have JACK Server working with 10.10, 11.04 and 11.10, but not 12.04 and now 12.10. I have installed jackd, jackd2 and qjackctl, surfed many forums, and have even given advice on how to get JACK working, but now I am stuck.

    Tue Nov 27 22:30:46 2012: Saving settings to "/home/shane/.config/jack/conf.xml" ...
    22:31:19.960 D-BUS: JACK server could not be started. Sorry
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started
    Tue Nov 27 22:31:19 2012: Starting jack server...
    Tue Nov 27 22:31:19 2012: JACK server starting in realtime mode with priority 10
    Tue Nov 27 22:31:19 2012: ERROR: cannot register object path "/org/freedesktop/ReserveDevice1/Audio0": A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Tue Nov 27 22:31:19 2012: ERROR: Failed to acquire device name : Audio0 error : A handler is already registered for /org/freedesktop/ReserveDevice1/Audio0
    Tue Nov 27 22:31:19 2012: ERROR: Audio device hw:0,0 cannot be acquired...
    Tue Nov 27 22:31:19 2012: ERROR: Cannot initialize driver
    Tue Nov 27 22:31:19 2012: ERROR: JackServer::Open failed with -1
    Tue Nov 27 22:31:19 2012: ERROR: Failed to open server
    Tue Nov 27 22:31:21 2012: Saving settings to "/home/shane/.config/jack/conf.xml" ...
    22:31:22.047 Could not connect to JACK server as client. - Overall operation failed. - Unable to connect to server. Please check the messages window for more info.
    Cannot connect to server socket err = No such file or directory
    Cannot connect to server request channel
    jack server is not running or cannot be started

    Can anyone assist?

    Read the article

  • Increase the size of Taskbar Preview Thumbnails in Windows 7

    - by Matthew Guay
    Taskbar thumbnail previews are incredibly useful in Windows 7, but for some users they may be too small. Here's a tool to help you make your taskbar thumbnail previews just the way you want them.

    A few years ago we featured a tool to increase the size of your thumbnail previews in Windows Vista, but unfortunately this application doesn't work correctly in Windows 7. However, there is a new tool for Windows 7 that lets you customize your taskbar thumbnail previews even more. With it, you can change almost anything about them. The default taskbar thumbnails are nice, but may be too small for users with vision problems or with very high resolution monitors. Whatever your need, this is a great tool to make the thumbnails look and work just like you want.

    Let's get started
    Download the Windows 7 Taskbar Thumbnail Customizer (link below), and unzip the files. Run the Windows 7 Taskbar Thumbnail Customizer when you're done. Simply double-click on it; you don't need to run it as administrator.

    Now you can change the size, spacing, margin, and delay time of your taskbar thumbnails. The Delay Time setting is very handy; to speed things up, we set it to 0 so there's no delay between when you mouse over a taskbar icon and when you see the thumbnail. Simply drag the slider to the size (or time, for the delay setting) you want, and click Apply settings. Windows Explorer will automatically restart, and your new taskbar thumbnails will be ready to use.

    Here is the default Windows 7 thumbnail preview of a video playing in Media Player, and here is the taskbar thumbnail enlarged to 380px - now you can really watch a video from your taskbar thumbnail.

    The larger taskbar thumbnails show up a little differently in Internet Explorer. IE shows a larger preview of your active tab, and smaller previews of your other tabs. Notice also that Aero Peek shows the tab you're hovering over, but the tab name in IE's toolbar doesn't change to the one you're previewing.

    Here we increased the width between the thumbnails while keeping the thumbnails at their default size. This could be useful if you have trouble selecting the correct preview, and we can imagine it would be a very useful modification on touch screens.

    And if you ever take your changes too far and want to revert to your default Windows 7 taskbar thumbnail previews, simply run the Customizer again and select Restore Defaults. Windows Explorer will restart again, and your taskbar thumbnails will be back to their default settings.

    Conclusion
    This tool makes it safe and easy to change the size, spacing, and more of your taskbar thumbnail previews. And since you can always revert to the default settings, you can experiment without fear of messing up your computer. If you'd prefer to change the settings manually without using a dedicated application, here's a list of the registry changes you can make to accomplish this by hand.

    Link: Download the Windows 7 Taskbar Thumbnail Customizer from The Windows Club

    Read the article

  • Running ODI 11gR1 Standalone Agent as a Windows Service

    - by fx.nicolas
    ODI 11gR1 introduces the capability to use OPMN to start and protect agent processes as services. Setting up the OPMN agent is covered in the following post and extensively in the ODI Installation Guide. Unfortunately, OPMN is not installed along with ODI, and ODI 10g users who are really at ease with the old Java Wrapper are a little bit puzzled by OPMN, and ask: "How can I simply set up the agent as a service?" Well... although the Tanuki Service Wrapper is no longer available for free, and the agentservice.bat script is lost, you can switch to another service wrapper for the same result. For example, Yet Another Java Service Wrapper (YAJSW) is a good candidate.

    To configure a standalone agent with YAJSW:

    1. Download YAJSW and uncompress the zip to a folder (called %YAJSW% in this example).
    2. Configure, start and test your standalone agent. Make sure that this agent is loaded with all the required libraries and drivers, as the service will not dynamically load drivers added subsequently to the /drivers directory.
    3. Retrieve the PID of the agent process:
       - Open Task Manager.
       - Select View > Select Columns.
       - Check the PID (Process Identifier) column, then click OK.
       - In the list of processes, find the java.exe process corresponding to your agent, and note its PID.
    4. Open a command line prompt in %YAJSW%/bat and run: genConfig.bat <your_pid>
       This command generates a wrapper configuration file for the agent, called %YAJSW%/conf/wrapper.conf.
    5. Stop your agent.
    6. Edit the wrapper.conf file and modify the configuration of your service. For example, modify the display name and description of the service as shown in the example below. Important: make sure to escape the commas in the ODI encoded passwords with a backslash! In the example below, the ODI_SUPERVISOR_ENCODED_PASS contained a comma character which had to be prefixed with a backslash.

    # Title to use when running as a console
    wrapper.console.title="AGENT"

    #********************************************************************
    # Wrapper Windows Service and Posix Daemon Properties
    #********************************************************************
    # Name of the service
    wrapper.ntservice.name=AGENT_113
    # Display name of the service
    wrapper.ntservice.displayname=ODI Agent
    # Description of the service
    wrapper.ntservice.description=Oracle Data Integrator Agent 11gR3 (11.1.1.3.0)
    ...
    # Escape the comma in the password with a backslash.
    wrapper.app.parameter.7 = -ODI_SUPERVISOR_ENCODED_PASS=fJya.vR5kvNcu9TtV\,jVZEt

    7. Execute your wrapped agent as a console by calling runConsole.bat from the command line prompt. Check that your agent is running, and test it again. This command starts the agent with the configuration but does not install it as a service yet.
    8. To install the agent as a service, call installService.bat.

    From that point, you can view, start and stop the agent via the Windows services. Et voilà!

    Two final notes:
    - To modify the agent configuration, you must uninstall/reinstall the service. For this purpose, run uninstallService.bat to uninstall it and repeat the process above.
    - To be able to uninstall the agent service, you should keep a backup of the wrapper.conf file. This is particularly important when starting several services with the wrapper.

    Read the article

  • How to solve a CUDA crash when running the CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA with the following instructions:
    wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
    wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
    wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
    chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
    sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
    echo "/usr/local/cuda/lib64" > ~/cuda.conf
    echo "/usr/local/cuda/lib" >> ~/cuda.conf
    sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
    sudo ldconfig
    echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
    chmod +x gpucomputingsdk_4.2.9_linux.run
    ./gpucomputingsdk_4.2.9_linux.run
    sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
    sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    After that, when I run ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/fluidsGL, the machine gets stuck; even the mouse and keyboard stop responding. How can I solve this? Thank you~
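    One hedged diagnostic, not a confirmed fix: before re-running the sample, it is worth verifying that the NVIDIA driver, rather than a software OpenGL renderer, is actually serving GL, since a driver/GL mismatch is a common cause of full-system freezes with the graphical CUDA samples. A minimal check (glxinfo is assumed to be available, e.g. from the mesa-utils package):
    # confirm the kernel module is loaded and the driver responds
    lsmod | grep nvidia
    nvidia-smi
    # confirm OpenGL is served by NVIDIA, not a software renderer
    glxinfo | grep -i "opengl vendor"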

    Read the article

  • Sucky MSTest and the "WaitAll for multiple handles on a STA thread is not supported" Error

    - by Anne Bougie
    If you are doing any multi-threading and are using MSTest, you will probably run across this error. For some reason, MSTest runs in STA threading mode by default. WTF, Microsoft! Why so stuck in the old COM world? When I run the same test using NUnit, I don't have this problem. Unfortunately, my company has chosen MSTest, so I have a lot of testing problems. NUnit is so much better, IMO.
    After determining that I wasn't referencing any unmanaged code that would flip the thread into STA, which can also cause this error, the only thing left was the testing suite I was using. I dug around a little and found an obscure setting for the Test Run Config settings file that you can't set through its interface. You have to open it up as a text file and add the following setting:
    <ExecutionThread apartmentState="MTA" />
    This didn't break any other tests, so I'm not sure why it's not the default, or why there is nothing in the test run configuration app to change this setting. Here is the code I was testing:
    public void ProcessTest(ProcessInfo[] infos)
    {
        // one wait handle per work item
        WaitHandle[] waits = new WaitHandle[infos.Length];
        int i = 0;
        foreach (ProcessInfo info in infos)
        {
            AutoResetEvent are = new AutoResetEvent(false);
            info.Are = are;
            waits[i++] = are;

            Processor pr = new Processor();
            WaitCallback callback = pr.ProcessTest;
            ThreadPool.QueueUserWorkItem(callback, info);
        }

        // blocks until every queued work item signals its event;
        // this is the call that throws on an STA thread
        WaitHandle.WaitAll(waits);
    }
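    For context, here is roughly where that element sits in a .testrunconfig file. The root element and namespace below are what Visual Studio typically generates for this file format; treat the exact attribute values (name, id) as assumptions and keep the rest of your generated file intact:
    <?xml version="1.0" encoding="UTF-8"?>
    <TestRunConfiguration name="Local Test Run" id="00000000-0000-0000-0000-000000000000" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2006">
      <!-- run tests on an MTA thread instead of MSTest's default STA -->
      <ExecutionThread apartmentState="MTA" />
    </TestRunConfiguration>
    If you want to confirm which apartment a test actually runs in, Thread.CurrentThread.GetApartmentState() will tell you at run time.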

    Read the article

  • TFS 2010 Server Name Change

    - by PearlFactory
    So I thought I would change the name of my machine so that the other devs can find the TFS server easily. TFS 2005 had the handy command-line utility tfsadminutil... alas, it is now gone. Here are the steps to complete the rename.
    First, edit the web.config, usually located (on a default install) at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config, and point the Data Source at the new machine name:
    <add key="applicationDatabase" value="Data Source=JUSTIN\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />
    Next, edit your previous solutions/projects:
    1) Open the solution file, i.e. ProductApp.sln.
    2) Edit the SccTeamFoundationServer URL under the Global section to point to the new name.
    If you have the DB server on the same machine, you will also need to fix the database user accounts assigned to the TFS databases:
    1) Remove the old machine-name account, i.e. the Tuned_Dev_PC_12\Justin user, from the TFS databases.
    2) Add the new Justin\Justin account associated with the new machine name to the TFS and Reporting databases. Either dbo or the TFSADMIN and TFSEXEC roles will do in this case (or add both).
    If the DB permissions are set up correctly, the operation completes cleanly; if it pauses or gets stuck, go back and check the permissions granted to the new JUSTIN\Justin user account.
    Also, if your project is still complaining about the old TFS name:
    1) Team > Connect to Team Foundation Server.
    2) Add/Remove TFS.
    3) Add the new TFS name.
    Once you have connected to the new TFS server, reload your project from TFS; this clears out a lot of the stale bindings that hang around in the local project/solution. This is similar to the fix for VSS 2005 and older.
    Cheers. (ETA is about 60-90 minutes, so weigh up the need vs. the payoff.) Shut down and restart when done.
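    To make the web.config step concrete: if the machine were renamed from JUSTIN to, say, TFSDEV01 (a hypothetical name used only for illustration), the applicationDatabase key would end up as:
    <add key="applicationDatabase" value="Data Source=TFSDEV01\SQLI01;Initial Catalog=Tfs_Configuration;Integrated Security=True;" />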

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >