Search Results

Search found 10033 results on 402 pages for 'execution speed'.

Page 217/402 | < Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >

  • Adding expire headers to content served from CDN?

    - by mdolon
    I'm using MaxCDN to serve content to my blog via W3 Total Cache. The problem I'm running into when evaluating my site with Google Page Speed and YSlow is that expires headers are not being sent on content delivered from the CDN, nor is that content coming from a cookieless domain. Is this something that is completely in the hands of my CDN, or is it something I can fix in my server configuration? Some info about my setup: nginx with php-fastcgi, WordPress 3.0, W3 Total Cache 0.9a (dev release), MaxCDN. The site: http://devgrow.com/
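
    If the CDN pulls files from your origin, one common approach is to set far-future expires headers in nginx itself, which many pull CDNs will then pass through. A minimal sketch, with the file-extension list and lifetime as placeholder assumptions (whether MaxCDN forwards these headers is a CDN-side setting worth checking separately):

        # hypothetical nginx origin config for static assets
        location ~* \.(css|js|png|jpe?g|gif|ico)$ {
            # emits both an Expires header and Cache-Control: max-age
            expires 30d;
        }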

    Read the article

  • Hybrid HDD/SSD Volume Windows

    - by ccrama
    Is it possible to create a hybrid-drive-like experience using an SSD and an HDD? In theory, frequently used files would be stored on the faster SSD, and larger or less-used files would be moved to the HDD. I know you can create spanned volumes in Windows, but I assume the speed difference would create some issues, and there's no way to set a preference for which files get stored on which drive. Any programs/info/advice would be great! Thanks! (P.S. I saw this question, but it was asked three years ago and some solutions have probably changed.)

    Read the article

  • Wait for linux machine to be rebooted

    - by Theo
    I have a small script to install an update on my remote machine. I would like to reboot the machine remotely and, once it is back up, continue with some more commands. What I currently do is:

        ssh root@myMachine << COMMANDS_ISSUED
        ###... Tasks
        init 6
        COMMANDS_ISSUED

        sleep 180s

        ssh root@myMachine << POST_REBOOT_COMMANDS
        ###.... More stuff
        POST_REBOOT_COMMANDS

    Is there a more elegant way to do it? Like pinging the machine every 5 seconds, up to a maximum of 4 minutes? I work with a few Linux machines that have different boot-up times, and if my script continued immediately after the reboot finished, it could save me quite some time. (Note: I don't want to parallelize execution over all machines, as I want to see for each machine whether everything worked fine.)
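
    One common pattern is to poll until sshd answers again instead of sleeping a fixed time. A hedged sketch (host name, grace period, and timings are placeholders):

        #!/bin/bash
        # wait up to 4 minutes, checking every 5 seconds, until the machine
        # accepts SSH connections again after "init 6"
        host=myMachine
        sleep 30                               # grace period so we don't catch it before it goes down
        for i in $(seq 1 48); do               # 48 * 5s = 240s = 4 minutes
            if ssh -o ConnectTimeout=3 -o BatchMode=yes "root@$host" true 2>/dev/null; then
                break                          # machine is back
            fi
            sleep 5
        done
        ssh "root@$host" <<'POST_REBOOT_COMMANDS'
        ###.... More stuff
        POST_REBOOT_COMMANDS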

    Read the article

  • How can I check that I'm not frying the chipset on my 945 gclf?

    - by Journeyman Geek
    I'm running an old 945GCLF as a home server. It's an odd little system with a passively cooled single-core Atom 230 processor and a small heatsink/fan combo over the chipset. lm-sensors detects one temperature sensor, which I guess is the processor:

        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:  +38.0°C  (crit = +100.0°C)

    The fan is making rather horrible noises, and I am considering unplugging it or using a speed reducer. There's also a risk it may fail anyway. Is there any way I can monitor the temperature of the chipset or be warned of any issues with it? Is there anything specific I can watch out for that would indicate trouble?
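
    One generic avenue, offered as standard lm-sensors workflow rather than anything specific to this board: re-run the detection script as root, since boards often carry a separate Super I/O chip that exposes chipset-area temperatures and fan RPM, which would also let you alarm on a stalled fan:

        sudo sensors-detect    # probe for Super I/O / SMBus monitoring chips; answer the prompts
        sudo sensors           # if a chip such as an it87 or w83627* turns up,
                               # fan RPM and extra temperature readings appear here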

    Read the article

  • Windows Server: Do I really need servers in remote locations?

    - by IMAbev
    I have one main site with several servers in a 2008/2012 environment. I have 4 remote sites that are physically close (a few miles apart) and are all connected to the main site by 20 Mb fiber on a private network. At each of the remote locations I have a Windows server that users log in to and where their files and apps are located. There are many considerations in answering this question, but the first thing I am wondering is: do I really need a server at each location? Users are just logging in to these servers for permissions, and a vast majority of my users are only using Word, Excel, and email. I am really interested in figuring out whether I need servers at these locations: $3,000 to $4,000 per server every 3-5 years, licensing, administration... I know there are other considerations (speed, redundancy, and if my link to the main site goes down the users have nothing), but I am just not convinced I need servers at these locations.

    Read the article

  • Lighttpd Rewrites & Blank page

    - by Stathis
    Hello, I have configured some lighttpd rewrites, one of which does not work. This is the rule that fails and causes a blank (white) page to be served:

        url.rewrite-once = (
            ...
            "^/search/([^\/]+)*/([^\/]+)*/([0-9]+)$" => "search.php?t=$1&k=$2&p=$3",
            ...
        );

    Note that it is the only rule with 3 parameters; all the rest in the section have 0-2. I found this error in the lighttpd error.log:

        2011-01-07 17:13:09: (mod_rewrite.c.374) execution error while matching: -8

    Can someone help? Thanks.
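
    For what it's worth, return code -8 is PCRE's PCRE_ERROR_MATCHLIMIT, and the nested quantifier ([^\/]+)* is a classic catastrophic-backtracking pattern that can hit that limit. Assuming each path segment is actually required (an assumption; the original * made them optional), dropping the outer * is a plausible fix:

        # hypothetical corrected rule: each segment matches exactly once,
        # avoiding the backtracking blow-up of ([^/]+)*
        "^/search/([^/]+)/([^/]+)/([0-9]+)$" => "search.php?t=$1&k=$2&p=$3",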

    Read the article

  • Setup a local file server for two networks

    - by rzlines
    Hi, I would like to set up a local Linux file server (using CentOS and Samba) that can be accessed by two independent local networks in the same house. I have two networks on the same floor with Windows 7 computers, but the networks are split because we have two internet connections. How do I go about this? Additional info: currently both networks use Dropbox to send files to each other, but that goes via the Internet and hence is slow. I would like to achieve the same locally to increase the speed.
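
    One direction, as a minimal sketch: give the CentOS box a NIC (or at least an address) in each subnet and let Samba bind to both. Interface names and the share path below are placeholders:

        # hypothetical /etc/samba/smb.conf fragment
        [global]
            # one interface (or address) per network
            interfaces = eth0 eth1
            bind interfaces only = yes

        [shared]
            path = /srv/share
            read only = no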

    Read the article

  • Websites not working with Opera 11.00, major bugs

    - by s hanley
    Sites like eBay and even Super User stop working properly when I use Opera 11.00. Menus stop working everywhere from eBay to GoDaddy: hovering on a menu item doesn't expand it, and no sub-menu slides out. This makes a large number of very popular websites unusable. Am I right in assuming this is a JavaScript issue? I use Opera for the Turbo feature (I have tested Opera with and without Turbo, so it's not Turbo's fault) because I'm on mobile broadband until I get my phone line sorted out. Turbo helps me save money, as well as allowing me to surf at a sane speed. Is there a Firefox or Chrome equivalent to Opera Turbo that doesn't cost money? I'm using Opera 11.00, build 1156.

    Read the article

  • Running a cronjob

    - by Ed01
    I've been puzzling over cron jobs for the last few hours. I've read documentation and examples. I understand the basics and concepts, but haven't gotten anything to work, so I would appreciate some help with this total noob dilemma. The ultimate goal is to schedule the execution of a Django function every day. Before I get that far, I want to know that I can schedule any old script to run, first once, then on a regular basis. So I want to: 1) write a simple script (perhaps a bash script) that will allow me to determine that yes, it did indeed run successfully, or that it failed; 2) schedule this script to run at the top of the hour. I tried writing a bash script that simply outputs some text to the terminal:

        #!/bin/bash
        echo "The script ran"

    Then I dropped this into a .txt file:

        MAILTO = *****.******@gmail.com
        05 * * * * /home/vadmin/development/test.sh

    But nothing happened. I'm sure I did many things wrong. Where do I start to fix all of this?
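
    A few likely culprits, offered as a sketch (the path and redacted address come from the question; the rest is standard cron behavior): the entry must live in your actual crontab (via crontab -e), not a loose .txt file; the script needs execute permission; MAILTO takes no spaces around =; 05 * * * * fires at five past each hour, not the top of the hour; and cron jobs have no terminal, so write output somewhere you can inspect:

        chmod +x /home/vadmin/development/test.sh   # make the script executable
        crontab -e                                  # then add the two lines below

        MAILTO=*****.******@gmail.com
        0 * * * * /home/vadmin/development/test.sh >> /tmp/test.log 2>&1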

    Read the article

  • How to set a character set per application in *nix?

    - by SimmaDoWN
    I am attempting to set a character set of IBM850 on Slackware Linux for a particular application (epic5). I'm using rxvt-unicode and have set up LANG/LC_*=en_US. If I set the encoding to IBM850 in KDE's Konsole program, I'm able to display certain characters correctly. I'd rather not use IBM850 for everything; is there a way to set/alias a command for per-application execution? I've tried things like:

        LC_CTYPE=IBM850 epic5
        LC_ALL=IBM850 epic5

    No success. Any help would be appreciated.
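
    Two avenues worth trying, hedged since they depend on the terminal and on which locales are installed rather than on anything shown in the question. LC_* values must name an installed locale, so a bare IBM850 is silently ignored; either convert at the terminal layer with luit, or build a combined locale first:

        luit -encoding IBM850 epic5     # translate between the app's IBM850 and a UTF-8 terminal

        # or compile an en_US locale with an IBM850 charmap, then use it:
        localedef -i en_US -f IBM850 en_US.IBM850
        LC_ALL=en_US.IBM850 epic5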

    Read the article

  • How many site visitors can my server handle? [closed]

    - by coolboycsaba
    I have a website, and I want to host it on my own computer, but I'm wondering if it's good enough. The website checks whether the user is logged in and then displays 15 items (title, description) from a MySQL database, plus the rating (stored in another database) and the comments (another database) for each item. It also displays some stats (number of items, comments), and there is an image for each item. My specs are: AMD Athlon 64 X2 Dual Core 5600+ at 2.90 GHz, 4.00 GB RAM, Windows 7 64-bit. So what do you think: how many visitors and items could it handle (at once or daily)? My internet connection is good, around 7-10 Mb upload and the same download speed.
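
    Rather than guessing from specs, the usual way to answer this is to measure: run a quick load test against the page and watch where requests per second flatten out. A minimal sketch with ApacheBench (the URL and concurrency level are placeholders):

        ab -n 1000 -c 50 http://localhost/index.php
        # -n: total requests, -c: concurrent clients; raise -c until the
        # "Requests per second" figure stops climbing or errors appear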

    Read the article

  • SCCM? Overkill?

    - by Le_Quack
    I.T. technician in a high school with around 1600 students, 250 staff, and 800+ client computers, mostly running Windows 7. I'm looking for a better way to manage clients (deploy software, track changes, inventory, etc.). I like the look of the SCCM 2012 features, but the case studies seem to be aimed at large multi-site infrastructures rather than a single mid-sized site. Is SCCM suitable for a mid-sized single site, or is it aimed at much larger corporations? If the latter, what would be more suitable? Just a note about me and my situation: I work as a technician in a school, part of a team of 3. My boss seems content with a network that works (just about), not a productive, well-maintained network that is easy to run and maintain. I'm still fairly early on in my I.T. career, so sorry if I'm not up to speed on all products. EDIT: Thanks for all the help. I'll take a look at SCE and SCCM and get some proposals drawn up to take to my boss/deputy head.

    Read the article

  • I am making a maze type of game using javascript and HTML and need some questions answered [on hold]

    - by Timothy Bilodeau
    First off, I am a noob to JavaScript but am willing to learn. :) I found a simple JavaScript movement engine created by another member on this site. Using that, I made it so my character can walk around within a rectangular room. I want to make it so the character can walk through a "doorway" in a wall to the next room. Either that, or make it so that if the character moves over a certain image within the room, it takes the player to another webpage in which the character "spawns" into the next room, and so on and so forth. Here is a link to get an idea of what I have made so far: http://bit.ly/1fSMesA. Any help would be much appreciated. Here is the JavaScript code for the character movement and boundaries:

        <script type='text/javascript'>
        // movement vars
        var xpos = 100;
        var ypos = 100;
        var xspeed = 1;
        var yspeed = 0;
        var maxSpeed = 5;

        // boundary
        var minx = 37;
        var miny = 41;
        var maxx = 187; // 10 pixels for character's width
        var maxy = 178; // 10 pixels for character's width

        // controller vars
        var upPressed = 0;
        var downPressed = 0;
        var leftPressed = 0;
        var rightPressed = 0;

        function slowDownX() {
          if (xspeed > 0) xspeed = xspeed - 1;
          if (xspeed < 0) xspeed = xspeed + 1;
        }

        function slowDownY() {
          if (yspeed > 0) yspeed = yspeed - 1;
          if (yspeed < 0) yspeed = yspeed + 1;
        }

        function gameLoop() {
          // change position based on speed, clamped to the boundaries
          xpos = Math.min(Math.max(xpos + xspeed, minx), maxx);
          ypos = Math.min(Math.max(ypos + yspeed, miny), maxy);
          // or, without boundaries:
          // xpos = xpos + xspeed;
          // ypos = ypos + yspeed;

          // change actual position (px units added; bare numbers only work in quirks mode)
          document.getElementById('character').style.left = xpos + 'px';
          document.getElementById('character').style.top = ypos + 'px';

          // change speed based on keyboard events
          if (upPressed == 1) yspeed = Math.max(yspeed - 1, -1 * maxSpeed);
          if (downPressed == 1) yspeed = Math.min(yspeed + 1, 1 * maxSpeed);
          if (rightPressed == 1) xspeed = Math.min(xspeed + 1, 1 * maxSpeed);
          if (leftPressed == 1) xspeed = Math.max(xspeed - 1, -1 * maxSpeed);

          // deceleration
          if (upPressed == 0 && downPressed == 0) slowDownY();
          if (leftPressed == 0 && rightPressed == 0) slowDownX();

          // loop (function reference instead of an eval'd string)
          setTimeout(gameLoop, 10);
        }

        function keyDown(e) {
          var code = e.keyCode ? e.keyCode : e.which;
          if (code == 38) upPressed = 1;
          if (code == 40) downPressed = 1;
          if (code == 37) leftPressed = 1;
          if (code == 39) rightPressed = 1;
        }

        function keyUp(e) {
          var code = e.keyCode ? e.keyCode : e.which;
          if (code == 38) upPressed = 0;
          if (code == 40) downPressed = 0;
          if (code == 37) leftPressed = 0;
          if (code == 39) rightPressed = 0;
        }
        </script>

    And here is the HTML that goes with it:

        <!-- The Level -->
        <img src="room1.png" />
        <!-- The Character -->
        <img id='character' src='../texture packs/characters/snazgel.png'
             style='position:absolute; left:100px; top:100px; height:40px; width:26px;' />
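
    One possible direction for the doorway idea, as a sketch built on the variables above (the door position and the next room's filename are invented for illustration):

        // hypothetical door region on the right wall; when the character
        // reaches it, load the next room's page (or, alternatively, swap the
        // background image and boundary values in place instead of navigating)
        var door = { x: maxx, yMin: 90, yMax: 130 };

        function checkDoor() {
          if (xpos >= door.x - 2 && ypos >= door.yMin && ypos <= door.yMax) {
            window.location.href = 'room2.html'; // placeholder next-room page
          }
        }
        // call checkDoor() once per tick, e.g. at the end of gameLoop()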

    Read the article

  • SCCM - How to make new deployed applications appear in Software Center faster?

    - by icehac
    I am trying to make the process between SCCM deployments and the Software Center (configmgr) faster, if not seamless. Right now, applications generally take about 1-2 hours to populate properly. However, under the "Configuration Manager" applet in the Windows Control Panel there is an "Actions" tab, and generally 5 minutes after running those actions the software will populate inside the Software Center. The downside is the user interaction with the "Actions" pane: I can't have a user going through this process when they request a new application that needs to be deployed through SCCM. I have played around with using "net stop ccmexec" and "net start ccmexec" to run all of these actions manually at service start, but it feels a bit archaic. Does anyone have any suggestions on how to speed this process up? I feel there is something simple I am missing.
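
    One scriptable alternative, sketched here with the standard ConfigMgr client WMI interface, so no user ever has to open the Actions tab. The schedule GUID shown is the commonly documented Machine Policy Retrieval & Evaluation cycle; verify it against your SCCM version, and swap in the target machine name:

        # hypothetical remote trigger; requires admin rights on the client
        Invoke-WmiMethod -ComputerName 'PCNAME' -Namespace 'root\ccm' `
            -Class 'SMS_Client' -Name 'TriggerSchedule' `
            -ArgumentList '{00000000-0000-0000-0000-000000000021}'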

    Read the article

  • Memory Speeds: 1x4GB or 2x2GB? [closed]

    - by Dasutin
    When it comes to speed, which is faster: having one 4 GB module in your system, or having two 2 GB modules? I'm not taking into account the fact that the system could have dual-channel capabilities. Also, what about a server environment? Would it be better to have one large, high-density module, or to break it up into several modules for speed and price? I heard an engineer at my office having a discussion with an employee. He said that it's better in all situations to have one large-capacity module instead of breaking it up: it would be cheaper and perform faster, and it would take longer for the computer to access what it needed if there were more modules instead of just one. His explanation didn't seem right to me, and I thought I would post this question here to see what other people thought.

    Read the article

  • Webserver - Memory-bound or CPU-bound? [closed]

    - by JJP
    Possible duplicate: How do you do load testing and capacity planning for web sites? I'm installing a social network using Zend Framework & MySQL, with lots of plugins & queries. I want the web server & SQL server on one box, and I'm trying to choose between two machines (on hetzner.de): A) Intel i7-2600, 3.4 GHz, 16 GB DDR3 RAM; B) Intel i7-920, 2.6 GHz, 24 GB DDR3 RAM. B has 50% more RAM but a roughly 25% slower clock speed. The question is: is it obvious where the bottleneck will be? Would I ever need 24 GB of RAM, even with lots of concurrent users?

    Read the article

  • Desktop Fun: Battlestar Galactica Wallpapers

    - by Asian Angel
    Are you feeling nostalgic and/or sad now that the Battlestar Galactica series has finished up? Now you can add a bit of that Galactica goodness to your desktop with our Battlestar Galactica wallpaper collection. If the image links fail for some reason, you can download the entire set as a zipped file here. Note: click on the picture to see the full-size image; these wallpapers vary in size, so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers, be certain to visit our new Desktop Fun section. If you are looking for some great icons to go with your new Battlestar Galactica wallpaper, make certain to check out our Sci-Fi Icon Packs collection here.

    Read the article

  • OTN Developer Days - Calgary, Alberta March 18 & Atlanta, GA April 1

    - by dana.singleterry
    Discover a faster way to develop Ajax-enabled applications based on Java and SOA standards. Get hands-on with Oracle JDeveloper, Oracle Application Development Framework, and Oracle Fusion Middleware 11g. You are invited to attend Oracle Technology Network (OTN) Developer Day, a free, hands-on workshop that will give you insight into how to create Ajax-enabled rich Web user interfaces and Java EE-based SOA services with ease. We'll introduce you to the development platform Oracle is using for its Fusion enterprise applications and show you how to get up to speed with it. The workshop will get you started developing with the latest versions of Oracle JDeveloper and Oracle ADF 11g, including the Ajax-enabled ADF Faces rich client components. Thursday, March 18, 2010, 8:00 a.m. - 5:00 p.m., Calgary Marriott Hotel, 110 9th Avenue SE, Calgary, Alberta T2G 5A6. Wednesday, April 1, 2010, 8:00 a.m. - 5:00 p.m., Four Seasons Hotel Atlanta, 75 Fourteenth Street, Atlanta, Georgia 30309. This workshop is designed for developers, project managers, and architects. Whether you are currently using Java, traditional 4GL tools like Oracle Forms, PeopleTools, and Visual Basic, or are just looking for a better development platform, this session is for you. Get explanations from Oracle experts, try your hand at actual development, and get a chance to win an Apple iPod Touch and Oracle prizes. Come see how Oracle can help you deliver cutting-edge UIs and standards-based applications faster with the Oracle Fusion development software stack. At this event you will: * Get to know the Oracle Fusion development architecture and strategy from Oracle's experts. * Learn the easy way to extend your existing development skill sets to incorporate new technologies and architectures that include Service-Oriented Architecture, Java EE, and Web 2.0. * Participate in hands-on labs and experience new technologies in a familiar and productive development environment with Oracle experts' guidance. Click on Register Now Calgary, Alberta to register for the Calgary event and Register Now Atlanta, GA to register for the free Atlanta event. Don't miss your exclusive opportunity to network with your peers and discuss today's most vital application development topics with Oracle experts.

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup (key lookup) is. The increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point anyone looking for further information to those posts: SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup; SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2; SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3; SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries. In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to open this question to all of you and see what the community thinks: is there any way, other than creating a covering index or an index with included columns, to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
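
    For readers new to the term: a key lookup appears when a nonclustered index satisfies the seek but not all of the columns the query needs, so each row costs an extra trip to the clustered index. A covering index carries those columns in its leaf level, which removes the lookup. A generic illustration (table and column names are invented for the example):

        -- hypothetical example: the lookup disappears once the index covers the query
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID)
            INCLUDE (OrderDate, TotalDue);   -- columns the SELECT needs, stored at the leaf level

        SELECT OrderDate, TotalDue
        FROM dbo.Orders
        WHERE CustomerID = 42;               -- seek on IX_Orders_CustomerID, no Key Lookup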

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best – Part 2

    - by pinaldave
    This blog post is part 2 of the earlier article SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best, prompted by Paulo R. Pereira. Paulo left an excellent comment on the earlier article, once again proving the point that the SQL Server engine is smart enough to figure out the best plan itself and use it for the query. Let us go over his comment as he posted it: “I think IN or EXISTS is the best choice, because there is a little difference between ‘Merge Join’ of query with JOIN (Inner Join) and the others options (Left Semi Join), and JOIN can give more results than IN or EXISTS if the relationship is 1:0..N and not 1:0..1. And if I try use NOT IN and NOT EXISTS the query plan is different from LEFT JOIN too (Left Anti Semi Join vs. Left Outer Join + Filter). So, I found a case where EXISTS has a different query plan than IN or ANY/SOME:”

        USE AdventureWorks
        GO
        -- use of SOME
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = SOME (
            SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
            UNION ALL
            SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA
        )
        -- use of IN
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID IN (
            SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
            UNION ALL
            SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA
        )
        -- use of EXISTS
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA
            UNION ALL
            SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA
        )

    When we look into the execution plans of the queries listed above, we do indeed get different plans, and the SQL Server engine creates the best (least-cost) plan for each query. Click on the image to see larger images. Thanks, Paulo, for your wonderful contribution. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Web Matrix released

    - by TATWORTH
    Microsoft has now released WebMatrix (and ASP.NET MVC 3, if you are so inclined!). One significant utility is IIS Express, which will replace Cassini. It is worth noting that SP1 for VS2010 should be out in Q1. Links: http://www.hanselman.com/blog/ASPNETMVC3WebMatrixNuGetIISExpressAndOrchardReleasedTheMicrosoftJanuaryWebReleaseInContext.aspx http://www.hanselman.com/blog/LinkRollupNewDocumentationAndTutorialsFromWebPlatformAndTools.aspx http://arstechnica.com/microsoft/news/2011/01/microsoft-releases-free-webmatrix-web-development-tool.ars I am impressed by the copious tutorials on MVC, which I include below. Intro to ASP.NET MVC 3 onboarding series (a Scott Hanselman and Rick Anderson collaboration, with Mike Pope as editor; both C# and VB versions): Intro to ASP.NET MVC 3; Adding a Controller; Adding a View; Entity Framework Code-First Development; Accessing your Model's Data from a Controller; Adding a Create Method and Create View; Adding Validation to the Model; Adding a New Field to the Movie Model and Table; Implementing Edit, Details and Delete; Source code for this series. MVC 3 updated and new tutorials/API reference on MSDN (Rick Anderson, lead programming writer; Keith Newman and Mike Pope, editors): ASP.NET MVC 3 Content Map; ASP.NET MVC Overview; MVC Framework and Application Structure; Understanding MVC Application Execution; Compatibility of ASP.NET Web Forms and MVC; Walkthrough: Creating a Basic ASP.NET MVC Project; Walkthrough: Using Forms Authentication in ASP.NET MVC; Controllers and Action Methods in ASP.NET MVC Applications; Using an Asynchronous Controller in ASP.NET MVC; Views and UI Rendering in ASP.NET MVC Applications; Rendering a Form Using HTML Helpers; Passing Data in an ASP.NET MVC Application; Walkthrough: Using Templated Helpers to Display Data in ASP.NET MVC; Creating an ASP.NET MVC View by Calling Multiple Actions; Models and Validation in ASP.NET MVC; How to: Validate Model Data Using DataAnnotations Attributes; Walkthrough: Using MVC View Templates; How to: Implement Remote Validation in ASP.NET MVC; Walkthrough: Adding AJAX Scripting; Walkthrough: Organizing an Application using Areas; Filtering in ASP.NET MVC; Creating Custom Action Filters; How to: Create a Custom Action Filter; Unit Testing in ASP.NET MVC Applications; Walkthrough: Using TDD with ASP.NET MVC; How to: Add a Custom ASP.NET MVC Test Framework in Visual Studio; ASP.NET MVC 3 Reference; System.Web.Mvc; System.Web.Mvc.Ajax; System.Web.Mvc.Async; System.Web.Mvc.Html; System.Web.Mvc.Razor.

    Read the article

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite development is the Oracle Application Testing Suite (OATS), built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite is comprised of: 1. Oracle Load Testing, for scalability, performance, and load testing; 2. Oracle Functional Testing, for automated functional and regression testing; 3. Oracle Test Manager, for test process management, test execution, and defect tracking. Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

    Read the article

  • What is Linq?

    - by Aamir Hasan
    LINQ changes the way data can be retrieved in .NET: it provides a uniform way to retrieve data from any object that implements the IEnumerable<T> interface. With LINQ, arrays, collections, relational data, and XML are all potential data sources. Why LINQ? With LINQ, you can use the same syntax to retrieve data from any data source:

        var query = from e in employees
                    where e.id == 1
                    select e.name;

    The LINQ project has three main parts. LINQ to Objects is an API that provides methods representing a set of standard query operators (SQOs) to retrieve data from any object whose class implements the IEnumerable<T> interface; these queries are performed against in-memory data. LINQ to ADO.NET augments the SQOs to work against relational data and is composed of three parts: LINQ to SQL (formerly DLinq) is used to query relational databases such as Microsoft SQL Server; LINQ to DataSet supports queries using ADO.NET data sets and data tables; LINQ to Entities is a Microsoft ORM solution, allowing developers to use Entities (an ADO.NET 3.0 feature) to declaratively specify the structure of business objects and use LINQ to query them. LINQ to XML (formerly XLinq) not only augments the SQOs but also includes a host of XML-specific features for XML document creation and queries. What you need to use LINQ: LINQ is a combination of extensions to .NET languages and the class libraries that support them. To use it, you'll need the following. Obviously LINQ itself, which is available in the Microsoft .NET Framework 3.5, downloadable at http://go.microsoft.com/?linkid=7755937. You can speed up your application development time with LINQ using Visual Studio 2008, which offers visual tools such as the LINQ to SQL designer and IntelliSense support for LINQ's syntax. Optionally, you can download the Visual C# 2008 Express Edition tool at www.microsoft.com/vstudio/express/download; it is the free edition of Visual Studio 2008 and offers a lot of LINQ support, such as IntelliSense and the LINQ to SQL designer. To use LINQ to ADO.NET, you need SQL

    Read the article

  • SQL SERVER – Delay Command in SQL Server – SQL in Sixty Seconds #055

    - by Pinal Dave
    Have you ever needed a WAIT or DELAY function in SQL Server? Well, I personally have never needed it, but I see lots of people asking for it. The need for the function seems to arise when developers are working with asynchronous applications or programs, where the user has to wait a while for another application to complete its processing. If you are a programming-language developer, it is very easy for you to make an application wait on a command; in SQL, I have personally rarely used this feature. However, I have seen lots of developers asking for it in SQL Server, hence I decided to build this quick video on the subject. We can use WAITFOR DELAY 'timepart' to create a SQL statement that waits. Let us see the concept in the following SQL in Sixty Seconds video: Related Tips in SQL in Sixty Seconds: Delay Function – WAITFOR clause – Delay Execution of Commands. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity
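
    As a quick illustration of the syntax (the delay value is arbitrary):

        -- pause this batch for 10 seconds, then continue
        WAITFOR DELAY '00:00:10';
        SELECT GETDATE() AS ResumedAt;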

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root and recursively "walking" the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization. This is apparent from a common-sense standpoint: since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme, and once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed:

        Parallel.Invoke(
            () =>
            {
                Console.WriteLine("Action 1 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 2 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 3 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            });

    Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer: we can divide our work in each step and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster). On my system, for example, the framework's sort handles ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right));
    }

    This routine will now run in parallel. When executing, we now see the CPU usage across all cores spike while it executes. However, there is a significant problem here: by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds! We're using more resources, as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead. Each time we split this array, we spawn two new tasks to parallelize this algorithm! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we're "over-parallelizing" this routine. This is a common problem when working with divide and conquer algorithms, and leads to an important observation: when parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how "deep" in the "tree" we currently are, and if we are below some number of levels, stop parallelizing. This approach is more general-purpose, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely, only to a specific depth, at which time we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total number of parallel operations significantly while still providing adequate work for each processing core. With this final change, my timings are much better. On average, I get the following timings:

        Framework via Array.Sort:                   7.3 seconds
        Serial Quicksort Implementation:            9.3 seconds
        Naive Parallel Implementation:              14 seconds
        Parallel Implementation Restricting Depth:  4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.

    Read the article

< Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >