Search Results

  • Windows Update can't install Windows Vista SP1

    - by Harry Johnston
    If you install Windows Vista RTM and run Windows Update, many updates are offered and will successfully install. Once all other updates are installed, Windows Vista Service Pack 1 is offered. When you attempt to install it, the service pack installation wizard appears, presenting the license agreement and so on; however, shortly after the installation starts, the wizard disappears. Windows Update says that the update was installed successfully. However, Service Pack 1 is not in fact installed, and will be detected as needed again on the next update check. Repeat ad nauseam. On checking the Windows Update log, error 0x80190194 appears near the beginning of an update check, associated with the URL http://update.microsoft.com/vista/windowsupdate/redir/vistawuredir.cab. Why won't Service Pack 1 install properly, and how do I fix it?


  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself in deciding how to use it. I'd like to use it for making and distributing builds. My ideal would be: Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up — some of our apps are 500MB or more.) Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team. When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight. I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests, though, not to do any of the code signing, IPA creation or TestFlight steps. So my questions: I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"? How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project… Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?


  • Checkpoint FW-1 disk full

    - by krugger
    Hi, the logging filled the disk of the management node, which caused the firewall nodes to start logging locally. After deleting some old logs and restarting the management node, logging is still being done locally at the firewall nodes. I have already done a fw fetchlogs on all firewall nodes to get the local log entries. How can I tell the firewall nodes that they should once again connect to the management node? Running syslog_servers show doesn't list any syslog servers, but the documentation mentions that this setting is for additional logging and not for logging between the firewall nodes and the management node.


  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at the current load on the system and how to drill down into historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing to show you before we need to start fixing things and showing how that affects the data collected: historical moments in time. For example, back in post #3 I was looking at some spikes in some of the monitored resources that took place a couple of weeks back. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, the Global Overview, and click on the 'back in time' icon, from which I can select the date and time I'm interested in; for example, the serious CPU queues I saw last week. This rolls back the time for all the information available in the Global Overview and in the drill-down to the server and the SQL Server instance there. That allows me to look at the top queries running at that point, sort them by CPU, and identify the query that was potentially causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool for investigating your systems going backwards in time, and it makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time; you need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.


  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back! In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing your application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

    Tools => Options => Test Tools

    Have you visited the treasures of the Visual Studio menu bar under Tools => Options => Test Tools lately? The options to enable or disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected from here. Ever wondered how you can change the default limit of 25 test results? That, again, can be changed from here. If you record a lot of web tests and wish for the web test recorder to start with "that" URL populated, this too can be specified from here. If you haven't so far, I would urge you to spend 2 minutes in the test tools options.

    Test Menu => Ready Steady Test Action!

    The test tools are under the Test menu in Visual Studio. Apart from being able to create a new test and test list, you can also load an existing vsmdi file, and you can manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, that selection is made from here. You can open the various test windows from under the Windows option in the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

    So, what is a vsmdi file?

    vsmdi stands for Visual Studio Test Metadata File. Placed under Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of TestLink elements nested within TestList tags, along with a RunConfiguration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see which tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests should run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

    Web Performance Test – The Truth!

    In Visual Studio 2010, "Web Tests" have been renamed to "Web Performance Tests". Apart from the renaming, there have been several improvements to this test type in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum, and a frequent question from many users is "Do Web Tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than by instantiating a browser. The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer. The tool adds to that misconception.
    After all, you record in IE; when running a web test you can select which browser to use; and the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The web test engine works at the HTTP layer and does not instantiate a browser, so no browser is running while the engine is sending and receiving requests. Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is an AJAX call; the most common examples of browser plugins are Silverlight and Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the browser control. If you want to test page behaviour that results from JavaScript or a plugin, consider using Coded UI Tests. A page may look like it failed in the result viewer when in fact it succeeded; looking closely at the response, and at the subsequent requests, makes it clear the operation succeeded. The reason the browser control paints such a misleading page is that JavaScript is disabled in this control. So, to reiterate, the web performance test engine:

    - Sends and receives data at the HTTP layer.
    - Does NOT run a browser.
    - Does NOT run JavaScript.
    - Does NOT host ActiveX controls or plugins.

    There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio.

    Demo – Web Performance Test

    [Demo] – Visual Studio Ultimate 2010: Test Settings and Configuration
    [Demo] – Visual Studio Ultimate 2010: Web Performance Test

    In this short video I try to answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, or loop driven, convert it to code, and add validations? What are the best practices for recording web performance tests?

    I have a web performance test, what next?

    Creating the web performance test was the first step towards load testing your application. Now that we have the base test, we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site; can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration, under heavy load for a long duration, and under a soak test (run a constant load for a week or two) to record the effects of sustained load over really long durations; this is a great way of identifying how your application handles the periodic IIS application pool recycle, which by default is configured to occur every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.
    BUT there are some best practices! => A Goal-Based Load Testing Approach

    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5,000 users if your intranet application will only ever have 100 simultaneous users; it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and to test realistically. It is an iterative process: refine your objectives, identify the key scenarios, establish the expected workload and the key metrics you want to report, record the web performance tests, simulate load, and analyse the results.

    Is your application already deployed in production? This is great! You can analyse the IIS logs to understand user behaviour. But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can also use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How to analyse IIS logs? Logging is on by default; those ninjas who already have IIS logs and need a way to analyse them can use the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others; it allows you to export the results of queries to many output formats such as CSV, XML, SQL Server, charts and others; and it works well with IIS 5, 6, 7 and 7.5. Frequently used Log Parser queries (a couple of samples appear at the end of this post).

    Demo – Load Test

    [Demo] – Visual Studio Ultimate 2010: Load Testing

    In this short video I try to answer the following questions: What are the types of performance testing? How do you perform goal-driven load testing, analyse the test run result, and generate a report?

    Recap

    A quick recap of what we have covered so far. Thank you for taking the time out to read this blog post; in part III of this series I'll be getting into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. See you in Part III.
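    By way of illustration, here are a couple of Log Parser queries of the kind referred to above. These are my own sketches rather than samples from the original post; they assume IIS W3C log files named ex*.log in the current directory:

        REM Top 10 most requested URLs
        LogParser.exe -i:IISW3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY COUNT(*) DESC"

        REM Slowest pages by average response time (time-taken is in milliseconds)
        LogParser.exe -i:IISW3C "SELECT cs-uri-stem, AVG(time-taken) AS AvgMs FROM ex*.log GROUP BY cs-uri-stem ORDER BY AVG(time-taken) DESC"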


  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the Sharepoint site in order to function. At the same time I was also working on another feature that required access to the Sharepoint search.asmx web service. I found out, the hard way, that the Sharepoint Web Services do not operate in an expected way while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory) they never attempt to request those credentials when the web service is contacted. As a result the services return a 401 Unauthorized response. The fix for my situation was to restrict anonymous access to the area that needed it (in this case the control in question had support for being used in an ASP.NET app that I could throw in a virtual directory). After that I removed anonymous access from IIS for the site itself and the QueryService requests were working once more. Here’s a related article with a bit more depth about a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4 Technorati Tags: Sharepoint,QueryService,WSS,IIS,Anonymous Access
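    For context, here is a minimal sketch of the kind of QueryService call that hits this 401 (my illustration, not from the original post; the SearchService proxy namespace and the query XML are assumptions, based on a web reference generated from search.asmx):

        // Hypothetical call against the SharePoint search web service.
        // Under anonymous access, IIS never negotiates these credentials,
        // so the call comes back as 401 Unauthorized.
        var service = new SearchService.QueryService();
        service.Url = "http://server/_vti_bin/search.asmx";
        service.Credentials = System.Net.CredentialCache.DefaultCredentials;
        string queryPacket =
            "<QueryPacket xmlns='urn:Microsoft.Search.Query'>" +
            "<Query><Context><QueryText type='STRING'>search terms</QueryText></Context></Query>" +
            "</QueryPacket>";
        string results = service.Query(queryPacket);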


  • C# 4.0 in a Nutshell, Fourth Edition

    - by outcoldman
    I just became the lucky owner of C# IN A NUTSHELL, 4th edition. I saw the previous, third edition of this book; we presented it at one of our events at Yaroslavl State University, but that copy was a Russian translation published in Russia, and the bad side of that was the really poor paper it was printed on. I should say that I haven't read this book to the end yet, but I am already surprised. Why? Because I had heard a lot about Richter's CLR via C# (I already have the English version of its 3rd edition waiting for my attention), and only a few words about C# IN A NUTSHELL, at least in my circle. I heard about this book just once, on one of the Alt.Net group podcasts, and the verdict was: Richter is a really good book, and C# IN A NUTSHELL is a good handbook. My opinion is: you should read Richter if you want to develop with .NET. But if you want to develop on .NET with C#, you should read C# IN A NUTSHELL too.


  • Clean up infected computer from viruses

    - by ripper234
    I have a computer which had AVG Free installed from day one. After several months of operation, it started detecting viruses and trojans all the time. Besides running a full scan, what should I do to clean the computer? Should I install another anti-virus or anti-malware tool (can it help?), or once viruses infect a system is the only real solution a clean format? (Lately I've heard of viruses that burn themselves into the BIOS, so a clean format might not always work... how common is this technique? Should I flash a fresh BIOS as well?)


  • Anatomy of a serialization killer

    - by Brian Donahue
    As I had mentioned last month, I have been working on a project to create an easy-to-use managed debugger. It's still an internal tool that we use at Red Gate as part of product support to analyze application errors on customers' computers, and as such, it should be easy to use and not require installation. Since the project has got rather large and important, I had decided to use SmartAssembly to protect all of my hard work. This was trivial for the most part, but the loading and saving of results was broken by SA's obfuscation, rendering the loading and saving of XML results basically useless, although the merging and error reporting was an absolute godsend and definitely worth the price of admission. (Well, I get my Red Gate licenses for free, but you know what I mean!) My initial reaction was to simply exclude the serializable results class and all of its members from obfuscation, and that was just dandy, but a few weeks on I decided to look into exactly why serialization had broken, and to change the code to work with SA so that any new code I write will be compatible with SmartAssembly, saving me some additional testing and changes to the SA project.

    In simple terms, SA does all that it can to prevent serialization problems; for instance, it will not obfuscate public members of a DLL, and it will exclude any types with the Serializable attribute from obfuscation. This prevents public members and properties from being made private and having their names changed. If the serialization is done inside the executable, however, public members have their access changed to private and are renamed. That was my first problem, because my types were in the executable assembly and implemented ISerializable, but did not have the Serializable attribute set on them!

        public class RedFlagResults : ISerializable
        {
        }

    The second problem was caused by the pruning feature. Although RedFlagResults had public members, they were not truly properties, and it used the GetObjectData() method of ISerializable to serialize the members. For that reason, SA could not exclude these members from pruning, which further broke the serialization.

        public class RedFlagResults : ISerializable
        {
            public List<RedFlag.Exception> Exceptions;

            #region ISerializable Members

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("Exceptions", Exceptions);
            }

            #endregion
        }

    So to fix this, it was necessary to make Exceptions a proper property by implementing get and set on it. I also added the Serializable attribute so that I don't have to exclude the class from obfuscation in the SA project any more, and the DoNotPrune attribute means I do not need to exclude the class from pruning.

        [Serializable, SmartAssembly.Attributes.DoNotPrune]
        public class RedFlagResults
        {
            public List<RedFlag.Exception> Exceptions { get; set; }
        }

    Similarly, the Exception class gets the Serializable and DoNotPrune attributes applied so all of its properties are excluded from obfuscation. Now my project has some protection from prying eyes by scrambling up the code so it's harder to reverse-engineer, without breaking anything. SmartAssembly has also provided the benefit of merging, so that the end user doesn't need to extract all of the DLL files needed by RedFlag into a directory, and it can be run directly from the .zip archive.
When an error occurs (hey, I'm only human!), an exception report can be sent to me so I can see what went wrong without having to, er, debug the debugger.
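    As an aside, the kind of breakage described above is easy to catch with a quick serialization round-trip test against the obfuscated build. A minimal sketch (my illustration, not from the original post; the collection contents are placeholders):

        // Hypothetical round-trip check: run this against the obfuscated build;
        // a SerializationException here points straight at the SA settings.
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        static class SerializationSmokeTest
        {
            public static void Run()
            {
                var results = new RedFlagResults { Exceptions = new List<RedFlag.Exception>() };
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream())
                {
                    formatter.Serialize(stream, results);
                    stream.Position = 0;
                    var roundTripped = (RedFlagResults)formatter.Deserialize(stream);
                    System.Diagnostics.Debug.Assert(roundTripped.Exceptions != null);
                }
            }
        }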


  • Growing Into Enterprise Architecture

    - by pat.shepherd
    I am writing this post as I sit in an Enterprise Architecture class, specifically on the Oracle Enterprise Architecture Framework (OEAF). I have long believed that SOA's key strength is that it is the first IT approach that blends or unifies business and technology. That is a common view and is certainly valid, but it is not completely true (or at least not completely accurate). As my personal view of EA grows, I realize more than ever that doing EA is FAR MORE than creating a reference architecture, creating a physical architecture or picking a technology to standardize on. Those are parts of the puzzle but not the whole puzzle by any stretch. I am now a firm believer that the various EA frameworks out there provide the rigor and structure required to allow the bridging of business strategy / vision to IT strategy / vision. The flow goes something like this: Business Strategy –> Business / Application / Information / Technology Architecture –> SOA Reference Architecture –> SOA Functional Architecture. Governance is imbued throughout to help map, measure and verify the business-to-IT coherence. With those in place, then (and only then) can SOA fulfill its potential to be more than an integration strategy, more than a reuse strategy: a foundation for tying the results of IT to business vision. Fortunately, EA is an ongoing process, and it is never too late to get started with an understanding of frameworks such as TOGAF, FEA, or OEAF. EA is also never-ending, in that it always needs to be applied; even once a full-blown Enterprise Architecture is established, it needs to be constantly evolved. For those who are getting deeper into EA as a discipline, there is plenty of runway to grow as your company/customer begins to look more seriously at EA. I will close with a pointer to a great book I have recently read on this subject: Enterprise Architecture as Strategy (http://www.amazon.com/Enterprise-Architecture-Strategy-Foundation-Execution/dp/1591398398/ref=sr_1_1?ie=UTF8&s=books&qid=1268842865&sr=1-1)


  • SQL SERVER – A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    Yesterday my friend Vinod Kumar wrote an excellent blog post on SQL Server 2012: Using SEQUENCE. I personally enjoyed reading the content on this subject. While I was reading the blog post, I thought of a very simple new puzzle. Let us see if we can solve it and learn a bit more about SEQUENCE. Here is the script which I executed:

        USE TempDB
        GO
        -- Create sequence
        CREATE SEQUENCE dbo.SequenceID AS BIGINT
            START WITH 3
            INCREMENT BY 1
            MINVALUE 1
            MAXVALUE 5
            CYCLE
            NO CACHE;
        GO
        -- Following will return 3
        SELECT NEXT VALUE FOR dbo.SequenceID;
        -- Following will return 4
        SELECT NEXT VALUE FOR dbo.SequenceID;
        -- Following will return 5
        SELECT NEXT VALUE FOR dbo.SequenceID;
        -- Following will return which number?
        SELECT NEXT VALUE FOR dbo.SequenceID;
        -- Clean up
        DROP SEQUENCE dbo.SequenceID;
        GO

    The above script gave me the following result set. 3 is the starting value and 5 is the maximum value. Once the sequence reaches its maximum value, what happens, and WHY? Bonus question: if you use UNION between 2 SELECT statements which both use the sequence, it also throws an error. What is the reason behind it? Can you attempt to answer these questions without running the code in SQL Server 2012? I am very confident that irrespective of the SQL Server version you are running, you will have great learning. I will follow up with the answer in the comments below. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
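    For readers who want to see the shape of the bonus question, a statement along these lines is the failing form (my illustration, not from the original post):

        -- Sketch: NEXT VALUE FOR is not allowed in statements that use
        -- UNION / UNION ALL, so SQL Server 2012 raises an error here.
        SELECT NEXT VALUE FOR dbo.SequenceID
        UNION ALL
        SELECT NEXT VALUE FOR dbo.SequenceID;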


  • What is the proper way to partition an OEM laptop for Windows 7 / Ubuntu 10.10 dual or triple boot?

    - by Denja
    Hi Linux community, I find myself struggling with the slowness of the Windows OS once again. It's time to change to Ubuntu 10.10 64-bit, as I'd like to use a faster operating system. My laptop's hard disk has RECOVERY and HP_TOOLS partitions, both primary, and I have the system recovery DVD for Windows 64-bit should anything bad happen.

    Here's the layout I used with Windows before:

    * (C:) Windows 7 system partition NTFS - 284.89GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition FAT32 - 99MB (Primary)
    * (D:) RECOVERY partition NTFS - 12.90GB (Primary)
    * SYSTEM partition NTFS - 199MB (Primary)

    Here's the layout I wanted to make:

    * (C:) Windows 7 system partition NTFS - 60GB (Primary) (sda1)
    * (D:) Windows DATA partition (user files) NTFS - 120GB (Primary) (sda2); I want to share this with Linux
    * Linux root Ext4 - 10GB (Extended) (sda3) (Ubuntu 10.10 64bit)
    * Linux home Ext3 - 90GB (Extended) (sda4) (Ubuntu 10.10 64bit)
    * Linux swap - RAM size, 3GB (sda5)
    * Linux root Ext3 - 18GB (Extended) (sda6) (OpenSuse or Puppy or Kubuntu)

    Here is my new Ubuntu 10.10 64bit layout in use now:

    * SYSTEM partition NTFS - 199MB (Primary) (sda1)
    * (C:) Windows 7 system partition NTFS - 90GB (Primary) (sda2)
    * (D:) Windows 7 RECOVERY partition NTFS - 12.90GB (Primary) (sda3)
    * Linux system partition EXTENDED - 195.1GB (Logical)
    * Linux root Ext4 - 10GB (Extended) (sda4)
    * Linux swap - RAM x 2 size, 6.1GB (sda5)
    * Linux home Ext3 - 179GB (Extended) (sda6)

    When I installed Ubuntu, I didn't know if I could wipe all the previous partitions because of the RECOVERY partition, so I just made the space for my extended partition with GParted by deleting HP_TOOLS (FAT32). By doing this I somehow managed to install Ubuntu 64 with success, and I also made the partitions for the swap and a third Linux OS as Jordan suggested. But I couldn't actually make the partitions for the shared NTFS (no option!). Question 1: What is the proper way to partition an OEM laptop for a Windows 7 / Ubuntu 10.10 dual or triple boot? Thank you in advance for your advice and suggestions, and Happy New Year to all!


  • Why doesn't update-manager allow me to upgrade distribution?

    - by spoulson
    I have an Ubuntu 9.04 PC behind a corporate firewall and proxy server. This requires that, in order to get update-manager to fetch and apply updates, I must set the proxy and authentication settings in the Synaptic network configuration. Once that's done, I can check for updates and things work smoothly (except that I don't get popup notifications of new updates and must manually check periodically). However, distribution updates just don't show up in update-manager, such as the newly released 9.10 Karmic Koala. I had the same issue in upgrading 8.10 to 9.04 and solved it by downloading and upgrading from the 9.04 ISO. What do I need to do to upgrade to 9.10 using the standard update-manager UI?


  • Podcast Show Notes: The Red Room Interview &ndash; Part 2

    - by Bob Rhubart
    Red Room bloggers Sean Boiling, Richard Ward, and Mervin Chiang bring their in-the-trenches perspective to the conversation once again in this week's edition of the OTN ArchBeat Podcast. Listen. (Missed last week? No problemo: Listen to Part 1.) In this segment the conversation turns to SOA governance and balancing the need for reuse against the need for speed. It's no mystery that many people react to the term "SOA Governance" in much the same way as they would to the sound of Darth Vader's respirator. But Mervin explains how a simple change in terminology can go a long way toward lowering blood pressure. Those interested in connecting with Sean, Richard, or Mervin can do so via the links listed below: Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog And you'll find the complete list of the Red Room SOA Best Practice Posts in last week's show notes. The third and final segment of the Red Room series runs next week. I have enough material from the original interview for a fourth program, but it'll have to wait. Also, as mentioned last week, the podcast name change is now complete, from Arch2Arch to ArchBeat. As WPBH-TV9 weatherman Phil Connors says, "Anything different is good." Technorati Tags: archbeat, podcast, arch2arch, soa, soa governance, oracle, otn


  • Is RAID 0 or JBOD better for home media server?

    - by Donald Hughes
    I have an external two-bay drive enclosure (the OWC Mercury Elite-AL Pro) connected to a Mac Mini (my home media server) over FireWire 800. I'm streaming media to other computers in the house over wired gigabit. I have two 1.5 TB drives that I'm using independently right now. The media is on one, and I'm mirroring the files to the other drive at night as a backup. But as I approach filling up the drive, I want to span those two drives together to give me a total of about 3 TB, and then buy another drive for backups. The external enclosure supports both RAID 0 and JBOD, but I'm not clear on which would be better in this situation. Would RAID 0 provide any performance improvement over JBOD for streaming video (possibly several streams at once)? How does each affect the MTBF of the drives? In general, should I choose RAID 0, JBOD, or keep them independent?


  • Friday Fun: Play MineSweeper in Google Chrome

    - by Asian Angel
    Are you addicted to MineSweeper and love to play it when taking a break from work? Now you can add that mine sweeping goodness to Google Chrome with the Chrome MineSweeper extension. Find Those Mines! Once the extension has been installed simply click on the "Toolbar Button" to access the game (opens in a new tab). The "emoticons" at the top of the tab window indicate the difficulty level of game play available. Sometimes you can make quick progress in a short time with this game… Only to lose moments later. So you do have to plan your strategy out carefully. You will be surprised (or perhaps alarmed?) at just how quickly you get addicted to playing "just one more round"! Want a bigger challenge? Click on the "middle emoticon" to access a tougher level. The ultimate level…how much mine sweeping punishment are you up for? Conclusion: If you are a MineSweeper fan then this will be a perfect addition to your browser. For those who are new to this game, you have a lot of fun just waiting for you. Links: Download the Chrome MineSweeper extension (Google Chrome Extensions)


  • Search predictions broken in Chrome - reinstall didn't solve the problem

    - by Shimmy
    I recently changed the default search engine to a custom Google search URL (using baseUrl) with some additional parameters and removed all the rest of the search engines, and since then, the search predictions have stopped working. I even tried to reinstall Chrome, but as soon as I resync, the problem is back! Search predictions are just gone with no option to fix them!! In IE, changing the search provider allows specifying a prediction (suggestion) provider; in Chrome, once you change the default search engine, you'll never be able to have predictions again!! This is a terrible bug, I mean WTF!!! Is there any workaround to that? I posted a bug report a while ago but it seems no one looks at it. I'm about to give up on Chrome and go back to IE; the only good things about Chrome are the extension market and the AdBlocker (which I can find in IE as well). The performance differences don't matter to me too much. Thanks


  • Setting up a Windows Server 2008 R2 DC + Fileserver : native or virtual?

    - by user126890
    I want to deploy a new DC + fileserver using Windows Server 2008 R2 SP1 Standard Edition on a Dell PowerEdge R410 with iSCSI storage for a small business (~30 people). Should I install the system natively on the server or use a virtualization layer? I don't have a budget for virtualization, so I've got to go with something free... What's the better working routine: taking snapshots of VMs, or taking backups (Acronis/CloneZilla) of systems? If I use a virtualization system, I need a GUI so that some people in the business can reset the system to an earlier state in emergency situations. I wanted to install phpVirtualBox once but never finished; is it suitable in a production environment? Server specs: Intel Xeon E5620 CPU (2.40GHz, 4C, 12MB Cache), 8GB RAM Dual Rank LV RDIMMs 1333MHz, 2x 1TB SATA 7.2K 3.5", RAID1


  • Opinions on NTFS for Mac solution?

    - by AngryHacker
    I am currently using the free NTFS-3G to access my NTFS drive from the Mac. It seems pretty stable (except once, in the very beginning, when it locked up the Mac and corrupted my NTFS drive, which I then fixed with chkdsk from a PC). However, speed is NOT one of its virtues; in fact, it's painfully slow. I've been looking at buying Paragon NTFS for Mac OS X 8.0. Their product comparison PDF claims nearly double the speed (vs NTFS-3G) in almost every category (read, write, etc...). In addition, there is now an unsupported native solution in Snow Leopard. Can folks here share their experiences? Is the native solution stable enough to be used for daily work? Should I just go with Paragon?


  • Thoughts on my new template language?

    - by Ralph
    Let's start with an example:

        using "html5"
        using "extratags"

        html {
            head {
                title "Ordering Notice"
                jsinclude "jquery.js"
            }
            body {
                h1 "Ordering Notice"
                p "Dear @name,"
                p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}."
                p "Here are the items you've ordered:"
                table {
                    tr {
                        th "name"
                        th "price"
                    }
                    for(@item in @item_list) {
                        tr {
                            td @item.name
                            td @item.price
                        }
                    }
                }
                if(@ordered_warranty)
                    p "Your warranty information will be included in the packaging."
                p(class="footer") {
                    "Sincerely,"
                    br @company
                }
            }
        }

    The "using" keyword indicates which tags to use. "html5" might include all the standard HTML5 tags, but your tag names wouldn't have to be based on their HTML counterparts at all if you didn't want them to be. The "extratags" library, for example, might add an extra tag called "jsinclude" which gets replaced with something like <script type="text/javascript" src="@content"></script>. Tags can optionally be followed by an opening brace, in which case they are closed automatically at the closing brace. If no brace is used, they are closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings; I think I'll use single quotes to indicate "no variable substitution", like PHP does. Filter functions can be applied to variables like @variable|filter, and arguments can be passed to the filter: @variable|filter:@arg1,arg2="y". Attributes can be passed to tags by including them in parentheses, like p(class="classname").

    Some questions:

    1. Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else?
    2. Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables.
    3. Tags and controls (like if, for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what?
    4. Do you like the attribute syntax? (round brackets)

    I'll add more questions in a few minutes, once I get some feedback.
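    To make the semantics concrete, here is one plausible rendering of the head fragment above under the rules described (my illustration; it assumes each tag maps to the HTML element of the same name, as with the jsinclude example):

        <html>
          <head>
            <title>Ordering Notice</title>
            <script type="text/javascript" src="jquery.js"></script>
          </head>
          ...
        </html>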


  • Watch IPL League 2010 Online On YouTube

    - by Suganya
    The IPL league matches for 2010 start tomorrow, and many of us will be interested in watching the broadcast live online. The first match, between the Deccan Chargers and the Kolkata Knight Riders, starts tomorrow at 20.00 IST. IPL T20 lasts for 45 days, from March 12th 2010 till April 25th 2010, and the entire league takes place in India. The opening ceremony takes place on the 12th, followed by the official game between the Deccan Chargers and the Kolkata Knight Riders. Most of us will not be able to watch the match on television, so this year IPL has joined hands with Google to make it available live on the net. How to watch an IPL match live online: Google and IPL have joined hands to make the matches available online for all viewers around the world; to watch an IPL match live online, log on here. How to watch an IPL match live on mobile: IPL and GCV (Global Cricket Ventures) have tied up with July Systems. You can watch IPL matches live on mobile by accessing the link directly from a GPRS-enabled mobile, or you can simply call MiX (Mobile Internet Experience) from your mobile phone using the toll-free number 08 123 123 123. Once you call this toll-free number, a link will be sent to your mobile; you can access that link to view live cricket on your mobile.


  • The visual effects in Windows 7 turn off after reboot

    - by Jagannath
    I have Windows 7 64-bit Professional on my PC. When using the English language, the visual effects are turned on. When I use the Telugu LIP (India), the visual effects are turned off every time I reboot my PC. When I say visual effects, I don't mean the Aero effects; the maximize and minimize animations are not shown. The effects don't show up even though the check boxes are checked. UPDATE: Yes, it is a standard account. The surprising thing is, the check boxes are checked but there are still no animations; once I change a setting so the Apply button becomes enabled and apply it, the animations show up again. UPDATE: I recreated the account and now the effects are restored properly.


  • OpenFilesView Displays All Open and Locked Files to Help Resolve In-Use Errors

    - by Jason Fitzpatrick
    Windows: You go to move a file and Windows throws up an "In Use" error. OpenFilesView shows you what application or system process is locking up the files you're trying to move. Sometimes the culprit is obvious; if you go to move your media folder and you've got your media player open watching South Park then shutting down the media player is the obvious solution. Other times the culprit is less obvious; sometimes Windows processes and less-than-obvious applications are accessing your files in ways that aren't apparent. The screenshot below showcases the "In Use" error. This is where OpenFilesView comes into play. Fire up the application to see a list of all active files on your system. The master list is a bit overwhelming (on our test system there were over 1200 open files) but you use the find command to drill down to specific file or folder names. Once you've found the locked file you can close the file handle, kill the process, or bring the process to the front (so you can examine the program, if possible, before terminating it). It's much more efficient than rebooting in an attempt to shake the In-Use error. OpenFilesView is freeware and works on Windows XP through Windows 7.


  • Windows Azure Virtual Machine Test Drive Kit

    - by Clint Edmonson
    The public preview of hosted Virtual Machines in Windows Azure is now available to the general public. This platform preview enables you to evaluate our new IaaS and Enterprise Networking capabilities. Once you have registered for the 90 Day Free Trial and created a new account, you can access the preview directly at this link: https://account.windowsazure.com/PreviewFeatures If you've been to any of my presentations lately, you'll know that I'm fired up about these new offerings. As I've worked through some scenarios for myself and with my customers, I've been collecting the resources that helped me to ramp up. Here's a collection of links to the items I've found most useful:

    Core Resources

    * Digital Chalk Talk Videos – detailed technical overviews of the new Windows Azure services and supporting technologies as announced June 7, including Virtual Machines (IaaS Windows and Linux), Storage, Command Line Tools: http://www.meetwindowsazure.com/DigitalChalkTalks
    * Scenarios Videos on YouTube – "how to" guides, including "Create and Manage Virtual Networks", "Create & Manage SQL Database", and many more: http://www.youtube.com/user/windowsazure
    * Windows Azure Trust Center – provides a comprehensive view of Windows Azure security and compliance practices: http://www.windowsazure.com/en-us/support/trust-center/
    * MSDN Forums for Windows Azure: http://www.windowsazure.com/en-us/support/preview-support/
    * Microsoft Knowledge Base article: Microsoft server software support for Windows Azure Virtual Machines

    Videos

    * Deep Dive into Running Virtual Machines on Windows Azure
    * Windows Azure Virtual Machines and Virtual Networks
    * Windows Azure IaaS and How It Works
    * Deep Dive into Windows Azure Virtual Machines: From the Cloud Vendor and Enterprise Perspective
    * An Overview of Managing Applications, Services, and Virtual Machines in Windows Azure
    * Monitoring and Managing Your Windows Azure Applications and Services
    * Overview of Windows Azure Networking Features
    * Hybrid Will Rule: Options to Connect, Extend and Integrate Applications in Your Data Center and Windows Azure
    * Business Continuity in the Windows Azure Cloud
    * Linux on Windows Azure

    Blogs

    * Understanding Windows Azure Virtual Machines
    * An Overview of Windows Azure Virtual Network
    * Virtual Machines and Windows
    * Running SQL Server in a Windows Azure Virtual Machine
    * Support for Linux Virtual Machines on Windows Azure


  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    A few months ago I received the following notification in Google Webmaster Tools for a website I look after. "Unnatural links to your site—impacts links. Google has detected a pattern of unnatural, artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster's control, so for this incident we are taking targeted action on the unnatural links instead of on the site's ranking as a whole. Learn more." The question here is: should we actively attempt to disavow these links, given that the action is seemingly targeted at just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster Tools, and so far I've been through the disavow and reconsideration request process 6 times, each taking 2-3 weeks, only to be supplied with just 2 more links that Google doesn't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! It seems disavowing is futile, as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above, however, there is a reconsideration request button that seems to imply I should be actively doing something here. UPDATE 14th October -- I have since created a small .NET application that you can feed the CSV links sample file from Google Webmaster Tools. What this tool does is crawl all the links and look for specific linking patterns, as per some configurable match strings. I realised that many of the links Google is taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with 1 of 5 different descriptions. The application I built uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.
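    For anyone attempting something similar, here is a rough C# sketch of the filtering step described in the update. This is my illustration, not the author's actual tool: the file names, CSV layout, and spam patterns are placeholders, and it matches against URLs rather than crawling pages for anchor text:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        class DisavowBuilder
        {
            static void Main()
            {
                // Placeholder patterns standing in for the five appended link descriptions.
                var spamPatterns = new[]
                {
                    new Regex("cheap-widgets-online", RegexOptions.IgnoreCase),
                    new Regex("best-widget-deals", RegexOptions.IgnoreCase)
                };

                var domains = new SortedSet<string>(StringComparer.OrdinalIgnoreCase);

                // links.csv: one linking URL per line, as exported from Google Webmaster Tools.
                foreach (var line in File.ReadLines("links.csv").Skip(1))
                {
                    var url = line.Trim().Trim('"');
                    if (!spamPatterns.Any(p => p.IsMatch(url)))
                        continue;

                    Uri uri;
                    if (Uri.TryCreate(url, UriKind.Absolute, out uri))
                        domains.Add(uri.Host); // disavow the whole domain, not just the page
                }

                // Google's disavow file format: "domain:example.com", one entry per line.
                File.WriteAllLines("disavow.txt", domains.Select(d => "domain:" + d));
            }
        }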

