Search Results

Search found 20031 results on 802 pages for 'full outer join'.

Page 153/802

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family now has quite a few editions, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, the entry-level Workgroup edition, and the new Web edition. None of these three includes the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is a one-shot deal aimed at using the wizard on an ad-hoc basis.

    To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio for building your own packages, a fully functional IDE integrated into Visual Studio (you get the full VS 2005/2008 IDE with the product). All core functions will be available, but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison and Features Supported by the Editions of SQL Server 2008 describe Standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options, allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL database engine is limited to 4 CPUs in Standard edition, SSIS is limited only by the base operating system.

    The advanced transforms available only in Enterprise edition are: Data Mining Training Destination, Data Mining Query Component, Fuzzy Grouping, Fuzzy Lookup, Term Extraction, Term Lookup, Dimension Processing Destination and Partition Processing Destination. The advanced task available only in Enterprise edition is the Data Mining Query Task.

    So in summary, if you want SQL Server Integration Services you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which have been found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them: Anu’s very important blog post, which describes both the problem and solutions to handle it, including the Test Attachment Cleaner tool and some QFE/CU releases to fix underlying bugs which prevented the tool from being fully effective; Brian Harry’s blog post, which also describes the problem; a forum thread which describes the problem with some solution hints; Ravi Shanker’s blog post, which describes best practices on solving this (TBP); and Grant Holliday’s blog post, which describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas: publishing of test results from builds; publishing of manual test results and their attachments in particular; publishing of deployment binaries for use during a test run; and bugs in SQL Server preventing total cleanup of data. (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run, and some of this data can grow rather large, like IntelliTrace logs and video recordings. The pushing of binaries which happens for automated test runs also contributes a lot to the size of the attached data; tests run during a build using code coverage will include all the files in the deployment folder.

    In order to handle this systematically, I have set up a 3-stage process: find out if you have a database space issue; set up your TFS server to minimize potential database issues; and, if you have the problem, clean up the database and otherwise keep it clean.

    Analyze the data
    Are your databases growing? Are unused test results growing out of proportion? To find out you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don’t have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries; I often find queries faster to use because I can tweak them the way I want them, but be aware that these queries are undocumented and unsupported and may change when the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases, and use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) AS SizeInMB
        FROM sys.master_files
        where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gives the following results; it is pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large
    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant’s blog to find the table sizes in descending size order. In our case we got this result: from Grant’s blog we learnt that tbl_Content is in the Version Control category, so the only major issue we have here is tbl_AttachmentContent.

    Find out which team projects possibly have too large attachments
    In order to use the TAC to find, and eventually delete, attachment data we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got this result (some names had to be removed), out of more than 100 team projects accumulated over quite some years. As can be seen, it is pretty obvious that “Byggtjeneste – Projects” is the main team project to take care of, with the ones on lines 2-4 as the next ones.

    Check which attachment types take up the most space
    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    From the result it is pretty obvious that the problem here is the binary files, as also mentioned in Anu’s blog.

    Check which file types, by their extension, take up the most space
    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension, sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do, if you have to do anything at all, and some of the information you need in order to set up your TAC settings file, both for a one-time cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up
    Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this you should install the following set of updates and components; the assumption is that your TFS server is at SP1 level. Install the QFE for KB2608743, which also contains detailed instructions on its use. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
    Binaries will still be uploaded if code coverage is enabled in the test settings, or if you change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE. The hotfix should be installed on the build servers (the build agents), the machine hosting the Test Controller, local development computers (Visual Studio) and local test computers (MTM). It is not required on the TFS server, the test agents or the build controller; it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: if you suspect hanging ghost files, they can, with some mental effort, be deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count, record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. To clear it, stop all components that depend on the SQL Server, like the TFS Server and SPS services (that is, all applications that connect to the SQL Server), then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link here to that CU. The hanging ghost file issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the fix) in certain cases because of this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink database command.

    Cleaning out the attachments
    The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point at a server URI, including the team project collection, and at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project; a sketch of such a script follows.
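    For illustration only, a wrapper batch file along these lines could do that. The collection URL, project names, settings file path and tool location below are hypothetical placeholders, and tcmpt.exe is assumed to be the TAC executable; adjust everything to your environment before scheduling it with the Windows Task Scheduler.

        @echo off
        rem Hypothetical wrapper: run the Test Attachment Cleaner once per team project.
        set COLLECTION=http://your-tfs-server:8080/tfs/DefaultCollection
        set SETTINGS=C:\TACSettings\RemoveOldAttachments.xml
        set TACDIR=C:\Program Files\Microsoft Team Foundation Server 2010 Power Tools\Test Attachment Cleaner

        for %%P in ("Project One" "Project Two" "Project Three") do (
            "%TACDIR%\tcmpt.exe" attachmentcleanup /collection:%COLLECTION% /teamproject:"%%~P" /settingsfile:"%SETTINGS%" /outputfile:"%TEMP%\%%~P.tcmpt.log" /mode:delete
        )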
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items under the deployment folder are published, which often means far more files than you would assume are necessary. This is a brute force technique; it works, but you need to take care when cleaning up. Grant has shown what his settings file looks like in his blog post, removing all attachments older than 180 days as long as there are no active work items connected to them. This kind of setting can be useful both in a one-time clean-up operation and in general scheduled maintenance. There are two scenarios we need to consider: cleaning up an existing overgrown database, and maintaining a server to avoid an overgrown database using a scheduled TAC.

    1. Cleaning up a database which has grown too big due to these attachments. This is a one-off job: we do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff, which can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items; Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or like this, if you want to remove only dll's and pdb's above that size: add an Extensions section (without that section, attachments of all extensions will be deleted).

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC. Run a schedule every night, removing old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done on them, so it can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above with no LinkedBugs section as a sweeper task, taking away all data older than you could possibly care about, and then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
    This second task could be much more specific and, based on your analysis, clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but in practice it recognizes any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database
    You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up by rebuilding the indexes (ALTER INDEX ... REBUILD, not a reorganize). I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to. A T-SQL sketch of this one-off operation follows.
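    As a rough, assumption-laden sketch (the collection database name is a placeholder, and looping over every user table is my own simplification rather than anything prescribed in this post), the one-off shrink-then-rebuild could be expressed in T-SQL like this:

        -- Hypothetical example: shrink the collection database, then rebuild all indexes.
        -- Replace Tfs_DefaultCollection with your collection database name.
        USE Tfs_DefaultCollection;
        GO
        DBCC SHRINKDATABASE (Tfs_DefaultCollection);
        GO
        -- Rebuild the indexes on every user table to remove the fragmentation left by the shrink.
        DECLARE @sql nvarchar(max) = N'';
        SELECT @sql += N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' REBUILD;'
        FROM sys.tables t JOIN sys.schemas s ON t.schema_id = s.schema_id;
        EXEC sp_executesql @sql;

    On a large collection database this rebuild step can take a while and should be run in a maintenance window, which is one reason the maintenance plan approach described above is attractive.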

    Read the article

  • Fan not working on thinkpad L430, laptop overheating

    - by Dirk B.
    I'm having problems controlling the fan of my Lenovo Thinkpad L430. The fan doesn't start. Without any fan control installed the fan just doesn't run. If I run stress, it does run a little, but it's nowhere near the speed it should be. After a while, the laptop just overheats and stops. I tried to install tp-fancontrol, and enabled thinkpad_acpi fancontrol=1, but to no avail. If I try to set the fan speed manually, it doesn't start up. In Windows, there's a program called TPFanControl. It turns out that this laptop uses a different scheme to control the fan than other Thinkpads: the level runs from 0 to 255, with max = 0 and min = 255. Now I'm looking for a fan control program that works on Linux. Does anyone know if one actually exists? Anyone with any experience of fan control on an L430? Update: sudo pwmconfig gives the following output:

        # pwmconfig revision 5857 (2010-08-22)
        This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm.
        We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been to full speed after the program has completed.
        Found the following devices:
          hwmon0 is acpitz
          hwmon1/device is coretemp
          hwmon2/device is thinkpad
        Found the following PWM controls:
          hwmon2/device/pwm1
        hwmon2/device/pwm1 is currently setup for automatic speed control. In general, automatic mode is preferred over manual mode, as it is more efficient and it reacts faster.
        Are you sure that you want to setup this output for manual control? (n) y
        Giving the fans some time to reach full speed...
        Found the following fan sensors:
          hwmon2/device/fan1_input     current speed: 0 ... skipping!
        There are no working fan sensors, all readings are 0. Make sure you have a 3-wire fan connected. You may also need to increase the fan divisors. See doc/fan-divisors for more information.

    regards, Dirk

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a database-driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: when a resource is displayed, the user can click a Translate button and it will show the current resource text and then let you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them; pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going. Ch… Ch… Changes Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved translated text out of the result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: the Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box; the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return. Warning: this is hacky code, but after a fair bit of testing I found it to work very well with all sorts of languages and accented and escaped text etc., as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code:

        /// <summary>
        /// Translates a string into another language using Google's translate API JSON calls.
        /// <seealso>Class TranslationServices</seealso>
        /// </summary>
        /// <param name="text">Text to translate. Should be a single word or sentence.</param>
        /// <param name="fromCulture">Two letter culture (en of en-us, fr of fr-ca, de of de-ch)</param>
        /// <param name="toCulture">Two letter culture (as for fromCulture)</param>
        public string TranslateGoogle(string text, string fromCulture, string toCulture)
        {
            fromCulture = fromCulture.ToLower();
            toCulture = toCulture.ToLower();

            // normalize the culture in case something like en-us was passed
            // retrieve only en since Google doesn't support sub-locales
            string[] tokens = fromCulture.Split('-');
            if (tokens.Length > 1)
                fromCulture = tokens[0];

            // normalize ToCulture
            tokens = toCulture.Split('-');
            if (tokens.Length > 1)
                toCulture = tokens[0];

            string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",
                                       HttpUtility.UrlEncode(text), fromCulture, toCulture);

            // Retrieve Translation with HTTP GET call
            string html = null;
            try
            {
                WebClient web = new WebClient();

                // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?)
                web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
                web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");

                // Make sure we have response encoding to UTF-8
                web.Encoding = Encoding.UTF8;
                html = web.DownloadString(url);
            }
            catch (Exception ex)
            {
                this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " +
                                    ex.GetBaseException().Message;
                return null;
            }

            // Extract out trans":"...[Extracted]...","  from the JSON string
            string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;

            if (string.IsNullOrEmpty(result))
            {
                this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult;
                return null;
            }

            //return WebUtils.DecodeJsString(result);

            // Result is a JavaScript string so we need to deserialize it properly
            JavaScriptSerializer ser = new JavaScriptSerializer();
            return ser.Deserialize(result, typeof(string)) as string;
        }

    Using the code is straightforward enough: simply provide a string to translate and a pair of two letter source and target languages:

        string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de");
        TestContext.WriteLine(result);

    How it works
    The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate web page, slightly changed to return a JSON result (&client=j) instead of the funky nested PHP-style JSON array that the default returns. The JSON result returned looks like this:

        {"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24}

    I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out the part of the full JSON response that contains the actual translated text. Since this is a JSON response I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). A couple of odd things to note in this code. First, a valid user agent string must be passed (or at least one starting with a common browser identification; I use Mozilla/5.0). Without this Google doesn't encode the result with UTF-8, but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead, which is odd to say the least.
    The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer on the whole payload. My internal version uses a small DecodeJsString() method to decode the JavaScript string without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above what's nice about it is that it works without a Google API key. I can't vouch for how many translates you can do before there are cut-offs, but in my limited testing, running a few stress tests on a Web server under load, I didn't run into any problems. Limitations: there are some restrictions with this. It only works on single words or single sentences; multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation which appears to kick in at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt: Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you; I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily. Resources: Live Localization Sample - the Localization Resource Provider administration form that includes options to translate text using Google and Babelfish interactively. TranslationService.cs - the full source code in the West Wind Web Toolkit's Globalization library that contains the translation code. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP.

    Read the article

  • Play a Complete HTML5 Version of Super Mario Bros. Online for Free

    - by Akemi Iwaya
    If you love playing Super Mario Brothers, but hate the hassle of dealing with or setting up the game console, then you will be pleased to know a new and complete version is now available to play online. Josh Goldberg has worked hard to recreate the classic game in its entirety in HTML5, so sit back, relax, and get ready to enjoy all that Mario goodness via your favorite browser. There are three ‘modes’ of game play available: play through reproductions of the original classic levels, test yourself against randomly generated levels, or use the level editor to create custom levels. Special Note: There are two online versions available…one for playing in Google Chrome and one for playing in all other browsers. For our example we chose to use the non-Chrome version. Play Full Screen Mario [For All Other Browsers] Play Full Screen Mario [Google Chrome Version] [via CNET News]     

    Read the article

  • Visual Studio IntelliSense for URL Rewrite

    - by OWScott
    Visual Studio doesn’t have IntelliSense support for URL Rewrite by default. This isn’t a show stopper, since it doesn’t result in build-stopping errors; however, it’s nice to have full IntelliSense support and to get rid of the warnings for URL Rewrite rules. RuslanY has released a Visual Studio schema update for URL Rewrite 2.0 which is available as a free quick download. The installation instructions (they are quick and easy) can be found here, and also include the schema for URL Rewrite 1.1. The install takes effect immediately, without restarting Visual Studio. A side question commonly comes up: can you get URL Rewrite support in the Visual Studio Web Server (aka Cassini)? The answer is no. To get URL Rewrite support in your development environment, use IIS7. You can set your Visual Studio projects to use IIS7 though, so you can have full debug, F5 or Ctrl-F5 support for IIS.
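    For context, the markup this schema validates is an ordinary web.config rewrite rule. The rule below is purely an illustrative example (the rule name and host names are invented), not something taken from RuslanY's download:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- Example rule only: redirect bare-domain requests to the www host. -->
              <rule name="Redirect to www" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^example\.com$" />
                </conditions>
                <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>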

    Read the article

  • Backup options in SharePoint 2007

    - by sreejukg
    It is very important to make sure the server farm backup is taken properly, so that in case of any disaster the administrator has a recent backup that can be used for a restore. This article addresses some of the options available for backup/restore in SharePoint 2007.

    Backup. There are two options that can be used to take a backup of SharePoint sites.

    Using the SharePoint Central Administration website. Using the Central Administration website, you can do backup/restore from the user interface. From Central Administration you can back up the server farm, a web application, or content databases. Follow these steps to take a backup of the server farm using Central Administration:
    1. Open the Central Administration website.
    2. Navigate to Operations -> Backup and Restore -> Perform a backup.
    3. Here you will have options to choose the items to back up. Select Farm (the topmost item in the list).
    4. Once you have selected the items to back up, click on “Continue to backup options”.
    5. Select “Full” as the type of backup.
    6. In the backup file location, enter the path where you need to store the backup. The path should be a UNC path; for example, for the C drive you may use \\server\c$\mybackupFolder.
    7. Click OK.
    8. You will now be redirected to the Backup and Restore Status page. This page shows the progress of the backup operation. You can use the refresh button to update the status of the backup (the page also refreshes automatically every 30 seconds). Once it has completed you can find the files in the specified folder.

    Using the STSADM command line tool. SharePoint comes with the STSADM command line tool. STSADM provides a lot of administrative operations that can be performed on SharePoint 2007 sites. You can find the STSADM command in the following location: C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin (you may change the drive letter according to your installation). STSADM provides a method for performing Office SharePoint Server 2007 administration tasks at the command line or by using batch files or scripts, and gives access to operations not available through the Central Administration site. The general syntax for STSADM is as follows:

        STSADM -operation <Operation Name> -parameter1 value1 -parameter2 value2 ...

    Using STSADM you can back up the server farm, a web application, or content databases. To perform any STSADM operation you need to be a member of the administrators group. Follow these steps to take a backup of the SharePoint server farm using the STSADM tool (note: make sure you are logged in to the computer where the Central Administration website is installed):
    1. Open the command prompt (you should run the command prompt with administrator privileges).
    2. Change the working directory to C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin.
    3. Enter the following command, then press Enter:

        stsadm -o backup -directory <UNC path> -backupmethod full

    4. You will get a success/failure message once the command finishes.

    How to schedule the backup. There is no option to schedule a backup using the Central Administration site, and there is no operation provided by STSADM to automate the backup, so the farm administrators need to take backups at regular intervals. To achieve this, you can write a batch file that includes the STSADM command to take a full backup of the server. This batch file can be scheduled using the Windows Task Scheduler to execute at certain intervals. Sample of the batch file:
    1. Open Notepad (or any other text editor).
    2. Enter the following commands:

        @echo off
        echo ===============================================================
        echo Back up the farm to <C:\backup>
        echo ===============================================================
        cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
        @echo off
        stsadm.exe -o backup -directory "<\backup>" -backupmethod full
        echo completed

    3. Save the file with a .bat extension. You can then schedule this batch file as you require.

    Other options. Using the STSADM tool, you can also take a backup of an individual site collection. The syntax for this is:

        stsadm -o backup -url <URL name for site collection> -filename <file name> [-overwrite]

    The explanations of the parameters are as follows: -url is the URL of the site collection you need to back up; -filename is the name of the backup file, e.g. c:\backup.bak; -overwrite is optional and indicates whether to overwrite the file if the specified filename already exists. If you are creating a batch file for scheduling the backup of a site collection, you may want the backup filename to be generated automatically. One option is to generate the filename with the date, so that you keep a backup for each day. For example, the following commands can be used to create a site collection backup:

        @echo off
        echo ===============================================================
        echo Back up the farm to <C:\backup>
        echo ===============================================================
        echo ===============================================================
        echo getting todays date to a variable
        echo ===============================================================
        @For /F "tokens=1,2,3 delims=/ " %%A in ('Date /t') do @(
            Set Day=%%A
            Set Month=%%B
            Set Year=%%C
            Set todayDate=%%C%%B%%A
        )
        cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
        @echo off
        stsadm -o backup -url <sitecollection url> -filename \\ServerName\ShareName\Backup_%todayDate%.bak -overwrite
        echo completed

    To read more about the backup STSADM operation, see http://technet.microsoft.com/en-us/library/cc263441.aspx

    Read the article

  • Lenovo Thinkpad L430 overheating due to fan problems

    - by Dirk B.
    This is the same question as "Fan not working on thinkpad L430, laptop overheating", but that question has been marked as a duplicate, which it is not, and I cannot reopen it. I'm having problems controlling the fan of my Lenovo Thinkpad L430. The fan doesn't start. Without any fan control installed the fan just doesn't run. If I run stress, it does run a little, but it's nowhere near the speed it should be. After a while, the laptop just overheats and stops. I tried to install tp-fancontrol, and enabled thinkpad_acpi fancontrol=1, but to no avail. If I try to set the fan speed manually, it doesn't start up. In Windows, there's a program called TPFanControl. It turns out that this laptop uses a different scheme to control the fan than other Thinkpads: the level runs from 0 to 255, with max = 0 and min = 255. Now I'm looking for a fan control program that works on Linux. Does anyone know if one actually exists? Anyone with any experience of fan control on an L430? Update: sudo pwmconfig gives the following output:

        # pwmconfig revision 5857 (2010-08-22)
        This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm.
        We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been to full speed after the program has completed.
        Found the following devices:
          hwmon0 is acpitz
          hwmon1/device is coretemp
          hwmon2/device is thinkpad
        Found the following PWM controls:
          hwmon2/device/pwm1
        hwmon2/device/pwm1 is currently setup for automatic speed control. In general, automatic mode is preferred over manual mode, as it is more efficient and it reacts faster.
        Are you sure that you want to setup this output for manual control? (n) y
        Giving the fans some time to reach full speed...
        Found the following fan sensors:
          hwmon2/device/fan1_input     current speed: 0 ... skipping!
        There are no working fan sensors, all readings are 0. Make sure you have a 3-wire fan connected. You may also need to increase the fan divisors. See doc/fan-divisors for more information.

    Update: if you need it, the lspci output is available here.

    Read the article

  • DIY Arcade Build Packed into an IKEA Console Table

    - by Jason Fitzpatrick
    If you checked out the Raspberry Pi-powered arcade table we shared earlier this week but want an all-in-one solution that doesn’t require as much configuration, this table uses a pre-programmed board that comes loaded with arcade classics. Courtesy of tinkerer Casper36, we’re treated to a compact build hidden inside an IKEA console table. One of the most polished aspects of this build is how well hidden the flush-mounted screen is under the dark glass tabletop: when the screen is off, it just looks like the table has a patterned glass insert. Hit up the link below for the full photo build-log. IKEA Console Arcade Build [via Make]

    Read the article

  • Inside NASA’s Shuttle Trainer

    - by Jason Fitzpatrick
    After more than 30 years of service, NASA has retired their full-scale shuttle training simulator. Take a photo tour and learn where you can visit the trainer and crawl around inside for a more hands-on experience. The trainer is currently on display at the Charles Simonyi Space Gallery at the Museum of Flight in Seattle, Washington. For those of us unable to visit the trainer in person, Wired Magazine has a full photo tour at the link below. Get Inside the Replica that Trained Every Shuttle Astronaut [Wired]

    Read the article

  • Best SEO practices for mobile URLs: 301, rel=canonical, or something else?

    - by Chris
    I am developing a site with a mobile version and am trying to figure out the appropriate way to manage the URLs for search engines. So far I've considered:
    1. Having a mobile site with rel="canonical" links to the regular site.
    2. Putting both the mobile site and full site on one URL, and doing user agent sniffing.
    3. Another opinion, from Spencer: "If you have a mobile site at a separate location or URL, you should 301 redirect each and every mobile page to its corresponding page on your main website. Employ user agent detection so that the mobile optimized version is served up if someone's coming in from a hand-held." - http://developer.practicalecommerce.com/articles/1722-Mobile-site-Development-Best-Practices-for-SEO-Usability
    Both 2 and 3 make it hard for a user who wants to switch to the full site or mobile site manually, but I'm not sure 1 is the best alternative. What's the best way to write URLs for a mobile site?
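    For illustration only, here is a rough ASP.NET MVC sketch of option 3 combined with a manual override; the MobileRedirectAttribute name, cookie name, user-agent list and host names are all invented for the example. Handheld user agents get a 301 to the m. host, while a fullsite=1 query-string flag sets a cookie so that a user who chose the full site is not bounced back.

        using System;
        using System.Text.RegularExpressions;
        using System.Web;
        using System.Web.Mvc;

        // Illustrative global action filter: 301-redirect handheld user agents to the mobile host,
        // unless the visitor has opted into the full site via a cookie.
        public class MobileRedirectAttribute : ActionFilterAttribute
        {
            private static readonly Regex MobileAgents =
                new Regex(@"iphone|ipod|android|blackberry|windows phone|opera mini", RegexOptions.IgnoreCase);

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                HttpRequestBase request = filterContext.HttpContext.Request;

                // "?fullsite=1" on any URL records the preference and suppresses future redirects.
                if (request.QueryString["fullsite"] == "1")
                {
                    filterContext.HttpContext.Response.Cookies.Add(
                        new HttpCookie("fullsite", "1") { Expires = DateTime.UtcNow.AddDays(30) });
                    return;
                }

                bool prefersFullSite = request.Cookies["fullsite"] != null;
                bool looksMobile = request.UserAgent != null && MobileAgents.IsMatch(request.UserAgent);

                if (looksMobile && !prefersFullSite)
                {
                    // Permanent redirect to the corresponding page on the mobile host.
                    string target = "http://m.example.com" + request.Url.PathAndQuery;
                    filterContext.Result = new RedirectResult(target, permanent: true);
                }
            }
        }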

    Read the article

  • Pure Front end JavaScript with Web API versus MVC views with ajax

    - by eyeballpaul
    This is more of a discussion about what people's thoughts are these days on how to split a web application. I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass this back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away; for those I would use DOM page-load events to call the server and load other areas using AJAX. Also, when it came to partial page refreshing, I would call an MVC action method which would return the HTML fragment which I could then use to populate parts of the page. This would be for areas that I did not want to slow down initial page load, or areas that fitted better with AJAX calls. One example would be table paging: if you want to move on to the next page, I would prefer an AJAX call to fetch that info rather than a full page refresh, but the AJAX call would still return an HTML fragment. My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front end background? An intelligent front end developer that I work with prefers to do more or less nothing in the MVC views, and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method which returns HTML, he would prefer to return a standard object and use JavaScript to create all the elements of the page. The front end developer way means that any benefits that I normally get with MVC model validation, including client side validation, would be gone. It also means that any benefits I get from creating the views, with strongly typed HTML templates etc., would be gone. I believe this would mean I would need to write the same validation for both the front end and the back end. The JavaScript would also need to have lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view for creating the row, and then return this as part of the AJAX call, which then gets injected into the table. Using the pure front end way, the JavaScript would take in an object (for, say, a product) for the row from the API call, and then create a row from that object, creating each individual part of the table row. The website in question will have lots of different areas, from administration, forms, product searching etc. - a website that I don't think needs to be architected as a single page application. What are everyone's thoughts on this? I am interested to hear from front end devs and back end devs.
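    To make the two styles concrete, here is a small hypothetical sketch (the ProductRow type, repository and view name are invented for illustration, not taken from a real project): the MVC action returns an HTML fragment rendered from a partial view, while the Web API controller returns plain objects for the front end JavaScript to render.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;
        using System.Web.Mvc;

        // Hypothetical model, shared by both styles.
        public class ProductRow { public int Id { get; set; } public string Name { get; set; } }

        // Style 1: the MVC action returns a ready-made HTML fragment (a partial view),
        // which the page injects into the table with a simple AJAX call.
        public class ProductsController : Controller
        {
            public ActionResult RowsPage(int page = 1)
            {
                IEnumerable<ProductRow> rows = ProductRepository.GetPage(page);
                return PartialView("_ProductRows", rows);   // server renders the markup
            }
        }

        // Style 2: the Web API controller returns plain objects as JSON,
        // and client-side JavaScript builds the table rows from the data.
        public class ProductsApiController : ApiController
        {
            public IEnumerable<ProductRow> GetPage(int page = 1)
            {
                return ProductRepository.GetPage(page);     // serialized to JSON for the front end
            }
        }

        // Minimal stand-in repository so the sketch is self-contained.
        public static class ProductRepository
        {
            private static readonly List<ProductRow> All =
                Enumerable.Range(1, 100).Select(i => new ProductRow { Id = i, Name = "Product " + i }).ToList();

            public static IEnumerable<ProductRow> GetPage(int page, int pageSize = 10)
            {
                return All.Skip((page - 1) * pageSize).Take(pageSize);
            }
        }

    The trade-off described in the post shows up directly here: style 1 keeps the row markup and any strongly typed helpers on the server, while style 2 keeps the server API clean but moves markup generation and duplicate validation into JavaScript.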

    Read the article

  • How to perform regular expression based replacements on files with MSBuild

    - by Daniel Cazzulino
    And without a custom DLL with a task, too. The example at the bottom of the MSDN page on MSBuild Inline Tasks already provides pretty much all you need for that, with a TokenReplace task that receives a file path, a token and a replacement and uses string.Replace with them. Similar in spirit but far more useful in its implementation is the RegexTransform in NuGet’s Build.tasks. It’s much better not only because it supports full regular expressions, but also because it receives items, which makes it very amenable to batching (applying the transforms to multiple items). You can read about how to use it for updating assemblies with a version number, for example. I recently needed to also supply RegexOptions to the task, so I extended the metadata and a little bit of the inline task so that it can parse the optional flags. When using the task, I can then pass the flags as item metadata. ...Read full article
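    The excerpt cuts off before the example, but as a guess at the shape of it (the Options metadata name and the regex values below are my own illustration, not necessarily the author's exact code; Find and ReplaceWith follow the NuGet Build.tasks convention), usage could look roughly like this:

        <!-- Illustrative only: RegexTransform items carrying an extra RegexOptions value as metadata. -->
        <ItemGroup>
          <RegexTransform Include="$(MSBuildProjectDirectory)\Properties\AssemblyInfo.cs">
            <Find>AssemblyVersion\("\d+\.\d+\.\d+\.\d+"\)</Find>
            <ReplaceWith>AssemblyVersion("$(Version)")</ReplaceWith>
            <Options>Multiline | IgnoreCase</Options>
          </RegexTransform>
        </ItemGroup>

        <Target Name="TransformSources">
          <RegexTransform Items="@(RegexTransform)" />
        </Target>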

    Read the article

  • What is a name for a job where you do system analysis, project management and data diagramming?

    - by David Archer
    In the last 4 months I've been able to manage a team and step away from the coding for a bit. I've been planning the system in full (both system analysis and project management, alongside action and data diagramming), writing the technical documentation and the code's architecture, keeping track of the other guys doing the actual coding, handling QA and bug reports, and dealing with clients. I had to take two days' training on node.js just to see if it would be suitable for a project we were considering. Is there a name for this job? Project Manager and Systems Architect don't quite seem to cover the same ground, and IT Manager seems way off. I only want to know so that I can get some qualification towards it and try to move into this kind of work full-time.

    Read the article

  • DIY Coffee Table Arcade Hides Retro Gaming Inside

    - by Jason Fitzpatrick
    Last week we showed you a nifty man-cave arcade-in-coffee-table build that was a bit, shall we say, exposed. If you’re looking for a sleek build that conceals its arcade heart until it’s game time, this clean and concealed build is for you. Courtesy of IKEAHacker reader Sam Wang, the beauty of this build is that other than the rectangle of black glass in the center of the table (which could just as well be a design accent) there is no indication that the coffee table is a gaming machine when not in use. Slide out the drawers and boot it up, however, and you’re in business: full MAME arcade emulation at your fingertips. Hit up the link below to check out his full photo build guide. My DIY Arcade Machine Coffee Table [via IKEAHacker]

    Read the article

  • Mobile redirect strategy

    - by Kevin
    Looking for help on deciding how to redirect users to a mobile-optimized version of my site (m.mysite.com). I'm looking at two methods:
    1. Server configuration (.htaccess or even Varnish)
    2. The web app itself (PHP)
    The problem I see with #1 is the "view full site" link on the mobile site. If a user clicks that link and goes to mysite.com, won't the server just redirect them back to m.mysite.com? For #2 I could create a cookie that is checked in addition to the user agent. Any suggestions/comments? Is there a better way to "remember" that the user clicked "visit full site"? Thanks, Kevin

    Read the article

  • Speaking at SQLRelay. Will you be there?

    - by jamiet
    SQL Relay (#sqlrelay) is fast approaching and I wanted to take this opportunity to tell you a little about it. SQL Relay is a 5-day tour around the UK taking in five SQL Server user groups, each one comprising a full day of SQL Server related learnings. The dates and venues are:
    21st May, Edinburgh
    22nd May, Manchester
    23rd May, Birmingham
    24th May, Bristol
    30th May, London
    Click on the appropriate link to see the full agenda and to book your spot. SQL Relay features some of this country's most prominent SQL Server speakers including Chris Webb, Tony Rogerson, Andrew Fryer, Martin Bell, Allan Mitchell, Steve Shaw, Gordon Meyer, Satya Jayanty, Chris Testa O'Neill, Duncan Sutcliffe, Rob Carrol, me and SQL Server UK Product Manager Morris Novello, so I really encourage you to go - you have my word it'll be an informative and, more importantly, enjoyable day out from your regular 9-to-5. I am presenting my session "A Lap Around the SSIS Catalog" at Edinburgh and Manchester, so if you're going, I hope to see you there. @Jamiet

    Read the article

  • Can't recognize local webserver

    - by Syed Khalil-ur-Rehman
    My Internet cable provider has set up a web server which hosts entertainment material like movies, songs, TV shows, games etc. When using Windows, the PC recognises it as a local web server and downloads files at the full LAN speed of 10 MB per second. By contrast, when using Ubuntu I am only able to download the files at my Internet speed, not more than 100 KB per second. Whatever I try, Ubuntu does not recognize the web server as a local area network web server but treats it as a normal Internet website. How can I make Ubuntu download files from this server at full LAN speed? Please help in this regard. The URL is http://dmasti.pk and yes, it is a web server browsable with a web browser like Firefox or IE.

    Read the article

  • Play PlayStation Games on a Rooted Nook Simple Touch

    - by Jason Fitzpatrick
    Just when you feel like you’ve seen it all, some guy comes along and shows you how he can play original PlayStation games on his ebook reader. Check out the video to see the surprisingly full-speed, albeit black and white, graphics in action. The secret sauce in Sean’s cool setup? He’s rooted the device and installed Free PlayStation Emulator (FPSE) on it, along with the NoRefresh hack, to enjoy touch-screen controls and PS emulation. The whole thing is shockingly smooth; once you get past the choppy intro videos, the games run at full speed. [via Hack A Day]

    Read the article

  • DIY Standing Desk Sports Super Sturdy Galvanized Pipe Legs

    - by Jason Fitzpatrick
    If you’re looking for a standing desk sturdy enough that you can tap dance on it, this DIY creation features thick pipe legs and a solid oak desktop. Courtesy of designer Jessica Allen, this standing desk can easily support a bank of monitors, heavy equipment, and even your entire body if need be, thanks to a sturdy galvanized plumbing pipe undercarriage and a 1″ thick oak top. We love the clean lines of the desk, but we’d be tempted to clutter them up a little with a tower rack mounted under the desk or on the inside of the thick pipe legs. Hit up the link below to check out the full build log. Have a cool standing desk (or desk tutorial) to share? Sound off in the comments. Steel Pipe Standing Desk

    Read the article
