Search Results

Search found 21028 results on 842 pages for 'single player'.


  • How do you enable multi-core virtualization in Windows 8 Pro?

    - by Greg B
    I've just got a new Dell Vostro 470 with a quad-core (8-thread) i7 3770 and I'm trying to run virtual machines on it, which works fine, except that I can't assign multiple cores to a VM. I've checked the BIOS, which states Intel Virtualization Technology [Enabled], but both Hyper-V and VirtualBox will only allow me to assign a single core. If I run the Intel Processor Identification Utility on the host OS, it tells me that Intel Virtualization Technology isn't supported by the processor, but according to the Intel website, it is. So what's going on? Have Dell clipped the i7's wings? Is there some config in Windows I need to change?
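
    A quick way to cross-check what Windows itself reports (a minimal sketch; it assumes the Windows 8 systeminfo output includes the Hyper-V Requirements section, which it omits once the Hyper-V role is already running):

        import subprocess

        # Ask Windows what it thinks of the firmware virtualization settings.
        output = subprocess.run(["systeminfo"], capture_output=True, text=True).stdout

        for line in output.splitlines():
            if ("Virtualization Enabled In Firmware" in line
                    or "VM Monitor Mode Extensions" in line):
                print(line.strip())

    If the firmware line says No despite the BIOS setting, a full power-off (not just a reboot) is sometimes needed before the VT-x toggle takes effect.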

  • What is recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: "What are the reasons why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes."

    Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them."

    So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150-byte limit: just to save on bandwidth costs, or is there a performance gain in doing so?
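
    The fixed gzip overhead is easy to see empirically. A minimal sketch, using random bytes as the worst case (real JSON/HTML compresses far better, which is part of why the recommended threshold is a range rather than a single number):

        import gzip
        import os

        # Random data is incompressible, so the output is the payload
        # plus gzip's ~20 bytes of header/trailer overhead.
        for size in (50, 150, 860, 10_000):
            raw = os.urandom(size)
            print(f"{size:6d} bytes raw -> {len(gzip.compress(raw)):6d} bytes gzipped")

    For compressible payloads, the overhead stops mattering somewhere in the 150-1000 byte range Google cites; below that, the header, trailer and deflate bookkeeping can exceed whatever the compression saves.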

  • How can I make an actual compact cassette "tape" mix-tape from iTunes?

    - by MikeN
    I want to make an actual compact cassette mix-tape as a gift for someone. I use iTunes to manage all of my music. So a few questions: If I gather a bunch of songs on a playlist for sides A/B of the tape, how can I ensure that the volume for all songs is the same as it plays on the tape? I was thinking of finding an old compact cassette recorder and feeding the stereo line output of my Mac into the cassette recorder's microphone input. Is that a good way to record onto the actual tape? How long is each side of a compact tape? Is there a default speed the tape plays at? Let's say I want to mesh some songs together so that they completely fill up one side of a tape (cutting 10 seconds off the end of one song or the beginning of another), what's the best way to do that?
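
    For the volume question, one approach is to level every track to the same average loudness before recording. A rough sketch using the pydub library (an assumption for illustration: it is a third-party tool, not part of iTunes, and it needs ffmpeg installed; the filenames and target level are placeholders):

        from pydub import AudioSegment

        TARGET_DBFS = -16.0  # arbitrary target average loudness

        def level(in_path, out_path):
            song = AudioSegment.from_file(in_path)
            # dBFS is the track's average loudness; shift each track to the target.
            song = song.apply_gain(TARGET_DBFS - song.dBFS)
            song.export(out_path, format="wav")

        level("track01.mp3", "track01_leveled.wav")

    iTunes' own Sound Check option aims at the same problem, and since it applies during playback from iTunes, a line-out recording would capture its effect.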

  • Restrictive routing best practices for Google App Engine with Python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically, all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design. Should I check the auth and ACL privileges in each of the handler modules (pages.profile and pages.dash in the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and, what's worse, I can't estimate the time needed to get it running. Besides, it looks to provide only two user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
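
    A common webapp2 idiom that keeps the per-page routes is to funnel every protected handler through a shared base class whose dispatch() does the auth/ACL check. A hedged sketch; the current_user helper is a hypothetical placeholder for however the session is looked up:

        import webapp2

        def current_user(request):
            # Hypothetical helper: resolve the logged-in user from the session.
            return None

        class AuthedHandler(webapp2.RequestHandler):
            required_role = "user"  # subclasses can demand stricter roles

            def dispatch(self):
                user = current_user(self.request)
                if user is None:
                    return self.redirect("/")  # anonymous -> login view
                # An ACL check against self.required_role would go here.
                return super(AuthedHandler, self).dispatch()

        class DashHandler(AuthedHandler):
            def get(self):
                self.response.write("dashboard")

    This gives the single choke point of the catch-all router without giving up webapp2's route table.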

  • How to back up a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. The initial data is likely to be about 80 MB, and I expect it to grow by approximately 5 MB to 10 MB a day. Can anyone recommend: a backup/restore policy (best practices for a small startup); which tools to use for backup? Another thing that is not clear to me: where are files normally backed up to, in the case of remote servers? If the files are backed up to the same machine (or even to another machine, but with the same host), there is, potentially, a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown, or the host company's server farm "catching fire", so remote as not to be worth worrying about, especially for a small (read: one man) startup like me?
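
    A minimal sketch of the usual pattern: snapshot locally, then push the archive to a different provider or machine, so the VPS isn't its own single point of failure (the paths and remote host are hypothetical):

        import datetime
        import subprocess

        DATA_DIR = "/var/www/data"
        REMOTE = "backup@offsite.example.com:/backups/myvps/"

        def nightly_backup():
            stamp = datetime.date.today().isoformat()
            archive = f"/tmp/data-{stamp}.tar.gz"
            # Archive the data directory, then copy it off the box.
            subprocess.run(["tar", "czf", archive, DATA_DIR], check=True)
            subprocess.run(["rsync", "-a", archive, REMOTE], check=True)

        if __name__ == "__main__":
            nightly_backup()

    Run from cron, daily full archives stay cheap for a long time at 5-10 MB of new data a day; rotating old archives out can wait until the data set grows.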

  • Book: DevOps for Developers

    - by Tori Wieldt
    We all know development and operations often act like silos, with "Just throw it over the wall!" being the battle cry. Many organizations unwittingly contribute to gaps between teams, with management by (competing) objectives; a clash of Agile practices vs. more conservative approaches; and teams using different sets of tools, such as Nginx, OpenEJB, and Windows on developers' machines and Apache, GlassFish, and Linux on production machines. At best, you've got sub-optimal collaboration; at worst, you've got the Hatfields and the McCoys. The book DevOps for Developers helps bridge the gap between development and operations by aligning incentives and sharing approaches for processes and tools. It introduces DevOps as a modern way of bringing development and operations together. It also aims to broaden the use of Agile practices in operations, to foster collaboration and streamline the entire software delivery process in a holistic way. Some individual aspects of DevOps may not be new (for example, you may have used the tool Puppet for years already), but with a new mindset ("my job is not just to code, it's to serve the customer in the best way possible") and a complete set of recipes, you'll be well on your way to success. DevOps for Developers also provides real-world use cases (e.g., how to use Kanban or how to release software). It provides a way to be successful in the real development/operations world. DevOps for Developers is written by Michael Hutterman, Java Champion and founder of the Cologne Java User Group. "With DevOps for Developers, developers can learn to apply patterns to improve collaboration between development and operations, as well as recipes for processes and tools to streamline the delivery process," Hutterman explains.

  • Can't find nfsbooted for Kerrighed PXE boot with Ubuntu Lucid Server

    - by Pengin
    I'm following installation guides for PXE booting and Kerrighed. I can't find the package nfsbooted for Ubuntu 10.04. Where did it go? Context: at work I have access to 8 mini-ITX PCs and am trying to build a cluster. My plans include trying Condor, GridGain, Hadoop, and recently Kerrighed has caught my eye. (I realise these are all for different kinds of things; I'm just evaluating.) Ideally, I'd like to have all the nodes network-boot from a single server, since that seems so much easier to manage, plus I can 'borrow' additional PCs for a while without touching their HDs. I've been getting on great with Ubuntu Lucid Server (10.04), trying to follow the only guides I can find to get PXE booting (and ultimately Kerrighed) to work. This guide is for Ubuntu 8.04 and this one is for Debian. They both refer to a package I can't seem to find: nfsbooted. Has this package been replaced? Am I doing something daft?

  • 12.04, nvidia-settings makes one of my dual monitors grey and useless, disables network

    - by Kerrick
    I'm running Ubuntu 12.04 64-bit, Precise Pangolin, with a PNY GTS 250 1GB video card and a monitor plugged into each of the DVI ports. I'm using the proprietary drivers (post-release updates). If I set up anything to do with Separate X Screens in nvidia-settings (and write it to xorg.conf and reboot), my second monitor has a grey background, no menu bar, and no ability to hold a window; the second monitor doesn't get picked up in a screenshot, and if I move my mouse cursor to it, the cursor becomes an ugly black X. Plus, my network is unable to connect to anything. If I subsequently delete /etc/X11/xorg.conf and reboot, everything goes back to working, albeit with a single monitor activated. If I set up anything to do with TwinView in nvidia-settings, my second monitor starts working, but it isn't seen as a second monitor by Ubuntu, so I can't apply color calibration to it separately. Plus, my mouse gets "caught" between the monitors every time I try to move the cursor between the two. What gives? If it helps, this is the xorg.conf that nvidia-settings generates for Separate X Screens.

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers
    Oracle Exadata Database Machine vs. IBM Power Systems
    How to Weigh a Purchase Decision
    October 2012

    Download the full report here.

    In this research-based white paper, conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording a more specific data point for comparison. This research found that Oracle Exadata:

    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to 12x improvement as described by customers over their prior solution.
    - Requires 40% fewer systems-administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution, with the resulting reserve capacity making Exadata easier for IT to operate because administrators can manage proactively, not reactively.

    Overall, Exadata operations and maintenance keep IT administrators from "living on the edge". And it's pre-engineered for long-term growth. Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

  • How/where to find all Update (msu) packages for Windows 7 (Enterprise)

    - by Rudi
    Hi, I've been using Wim2Vhd to create native-boot VHD files. (I'm using this to keep several development environments ready; I'm a developer, I know, I need help ;-) Now the first boot always ends up spending several minutes installing all of the Windows updates. I know I could avoid this if I had a massive list of QFE (.msu) packages that I could supply to the command line of Wim2Vhd. Where can I find all of the updates so I can download them? I have found this: http://www.megaleecher.net/Downloading_Microsoft_Windows_7_Offline_Updates but I'd like a more official source, and I'd like an easier way to download them all. I'm a developer, so if I find the information (a web service?) to get all the current packages, I'll build a tool to download them to a single location and generate a list of them for use with Wim2Vhd.
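
    The download-and-list tool could be as small as this sketch (the URL list file is hypothetical; the idea is to collect the .msu files and emit a list that a Wim2Vhd command line could consume):

        import os
        import urllib.request

        URL_LIST = "msu-urls.txt"  # one .msu download URL per line
        OUT_DIR = "updates"

        os.makedirs(OUT_DIR, exist_ok=True)
        names = []
        with open(URL_LIST) as fh:
            for url in (line.strip() for line in fh):
                if not url:
                    continue
                path = os.path.join(OUT_DIR, url.rsplit("/", 1)[-1])
                urllib.request.urlretrieve(url, path)
                names.append(path)

        print(",".join(names))

    The missing piece is still the official enumeration of current packages; the Microsoft Update Catalog can supply the URLs, but by hand rather than as a web service.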

  • faster (squid + apache httpd + apache tomcat)

    - by letronje
    We have a production setup with: Squid in front (caching images, JS, CSS, etc.); Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php (a few PHP pages)); Apache Tomcat 5.5 at the end, serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate), and serving some low-traffic PHP pages. What would be an ideal replacement for httpd, given that we need these features? Is there a way to replace (Squid + Apache) with a single entity that caches well (like Squid) for static stuff, rewrites URLs, compresses responses, and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and am wondering if it can help.

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" the single most important decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons:

    - OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK.
    - Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono?
    - Businesses may not want to depend on open-source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open-sourced.

    So, under which circumstances is Mono a valid business strategy for .NET applications?

  • Business Accelerators for Oracle BI Applications

    - by Mike.Hallett(at)Oracle-BI&EPM
    "Using the Oracle Business Accelerators across projects …, our deliveries are now 50% faster than traditional implementations."

    "Oracle Application Implementation is now within reach for Small to Medium Businesses who want a faster, cheaper & better … implementation. Oracle Business Accelerators (OBA) has made it possible."

    Find out more @ OBA for BI Applications {partner single sign-on required}

    Oracle Business Accelerator (OBA) kits are now available for BI Applications, configured for both:

    - E-Business Suite, and
    - JD Edwards E1.

    This has resources to help partners sell and deliver tightly scoped OBI Applications implementation projects with low risk and high margin in a few weeks, with huge potential to up-sell many add-on services to your delighted client. For example, select "Oracle BI Applications" and "E-Business Suite - R12.1.1", and the "OBA Project Consultant" then gets self-guided tutorials and access to a comprehensive delivery kit.

  • What's the fastest automatic way to transfer 2 GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually popping it into PC #2 to copy the files over. Dropbox is too slow at uploading and then downloading (syncing) 2 GB; it could take hours. Copying 2 GB over the network is also slow, because we're dealing with 10,000 little files that total 2 GB, not one giant 2 GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer (each file transfer pays its own per-file metadata and round-trip overhead, which dwarfs the actual data for small files). Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
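
    One workaround sketch for the many-small-files penalty: bundle everything into a single archive first, then copy that one file over the network. The paths and share name here are hypothetical; scheduling it nightly is a job for Task Scheduler.

        import shutil

        SRC = r"C:\data\nightly"   # the 10,000 small files
        DEST = r"\\PC2\share"      # a share on PC #2

        # One archive means one network transfer instead of 10,000.
        archive = shutil.make_archive(r"C:\temp\nightly", "zip", SRC)
        shutil.copy(archive, DEST)

    Unpacking on PC #2 restores the files, and the 2 GB moves at close to raw network speed.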

  • Google Talk and Video outside of GMail

    - by mankoff
    I'd like to use Google Talk/Video without having the full GMail or iGoogle interface displayed. The ideal setup would be the lightweight popout interface (link below) in a small Fluid.app single-instance browser as a stand-alone desktop app. If I log into GMail, the chat sidebar has a phone icon, so I can use Google Voice, and a camera icon next to me and some of my contacts. If I log into iGoogle, the chat sidebar has a camera next to me and some contacts, but no phone. I would like to have video chat (and perhaps the phone option) elsewhere. Google provides a chat talkgadget popout URL: http://talkgadget.google.com/talkgadget/popout but there is no phone or camera icon accessible.

  • Solaris 11 Customer Maintenance Lifecycle

    - by user12244672
    Hi Folks, Welcome to my new blog, http://blogs.oracle.com/Solaris11Life, which is all about the Customer Maintenance Lifecycle for Image Packaging System (IPS) based Solaris releases, such as Solaris 11. It'll include policies, best practices, clarifications, and lots of other stuff which I hope you'll find useful as you get up to speed with Solaris 11 and IPS. Let's start with a version of my Solaris 11 Customer Maintenance Lifecycle presentation which I gave at this year's Oracle Open World and at the recent Deutsche Oracle Anwendergruppe (DOAG - German Oracle Users Group) conference in Nürnberg. Some of you may be familiar with my Patch Corner blog, http://blogs.oracle.com/patch, which fulfilled a similar purpose for System V [five] Release 4 (SVR4) based Solaris releases, such as Solaris 10 and below. Since maintaining a Solaris 11 system is quite different to maintaining a Solaris 10 system, I thought it prudent to start this 2nd parallel blog for Solaris 11. Actually, I have an ulterior motive for starting this separate blog. Since IPS is a single tier packaging architecture, it doesn't have any patches, only package updates. I've therefore banned the word "patch" in Solaris 11 and introduced a swear box to which my colleagues must contribute a quarter [$0.25] every time they use the word "patch" in a public forum. From their Oracle Open World presentations, John Fowler owes 50 cents, Liane Preza owes $1.25, and Bart Smaalders owes 75 cents. Since I'm stinging my colleagues in what could be a lucrative enterprise, I couldn't very well discuss IPS best practices on a blog called "Patch Corner" with a URI of http://blogs.oracle.com/patch. I simply couldn't afford all those contributions to the "patch" swear box. :) Feel free to let me know what topics you'd like covered - just post a comment in the comment box on the blog. Best Wishes, Gerry.

  • Server 2008R2 in Extra Small Windows Azure Instance?

    - by Shawn Eary
    Windows Azure hosting for an Extra Small (XS) Windows VM comes out to about $10 a month right now. I think this XS instance gives you the equivalent of a 1 GHz CPU with 768 MB of RAM. I think the minimum requirements for Server 2008 are a 1 GHz CPU with 512 MB of RAM. Also, I think the minimum requirements for SQL Server Express are a 1 GHz CPU with 256 MB of RAM, and that the minimum requirements for Team Foundation Server Express 11 Beta are a 2.2 GHz CPU with 1 GB of RAM (this 2.2 GHz part could be a problem for my 1 GHz XS VM...). Given the performance of the XS Azure instance, would I be able to install a very basic MVC web site, a free instance of SQL Server Express, and a free single-user instance of Team Foundation Server Express 11 Beta, and run the XS VM instance without serious crashing? I know there are other shared web-host providers that can provide these features for me, but those hosting providers have the following disadvantages:

    - They sometimes cost a lot of money once all of the "add-ons" are in place.
    - They probably don't provide the level of security and employee integrity that Microsoft can provide.
    - They don't provide the total control that an Azure VM seems to provide.

  • Using Excel, how can I count the number of cells in a column containing the text "true" or "false"?

    - by Jay Elston
    I have a spreadsheet that has a column of cells where each cell contains a single word. I would like to count the occurrences of some words. I can use the COUNTIF function for most words, but if the word is "true" or "false", I get 0.

           A        B
        1  apples   2
        2  true     0
        3  false    0
        4  oranges  1
        5  apples

    In the spreadsheet table above, I have these formulas in cells B1, B2, B3 and B4:

        =COUNTIF(A1:A5,"apples")
        =COUNTIF(A1:A5,"true")
        =COUNTIF(A1:A5,"false")
        =COUNTIF(A1:A5,"oranges")

    As you can see, I can count apples, but not true or false. I have also tried this:

        =COUNTIF(A1:A5,TRUE)

    But that does not work either. Note: I am using Excel 2007.
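
    For what it's worth, the usual explanation is that COUNTIF coerces the criteria string "true" into the Boolean TRUE before matching, so it never matches the text value in the cells. A workaround sketch (assuming the cells really hold the literal text "true"/"false" rather than Boolean values) that performs the text comparison directly:

        =SUMPRODUCT(--(A1:A5="true"))

    The double negation converts the TRUE/FALSE comparison results into 1s and 0s that SUMPRODUCT can add up.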

  • Can't get any SMART or temperature data from HDDs

    - by Regs
    I have a PC with a recently installed Gigabyte GA-X79-UD5 motherboard. I've encountered a weird problem getting SMART data or temperatures for the HDDs. Every single tool I've tried in Windows 7 just can't get any data (HDTune, AIDA64...). I suspected that the SMART feature was disabled in the BIOS, but it seems there is no such option in the BIOS settings. I've even tried updating the BIOS, but still no luck. Same issue with both controllers on that MB (Intel and Marvell), and it seems unlikely that both controllers would end up with the exact same issue. Both controllers are working in AHCI mode. Is there anything that can interfere with getting SMART and temperature data from HDDs? Or is there any way to check whether it's actually a MB issue? Could it even be a hardware issue, given that all HDDs seem to work normally apart from the fact that I can't get any temperature or SMART data from them?

  • How do you KISS?

    - by Conor
    The KISS principle is a highly quoted design mantra. The aim of this principle is to stamp out unnecessary complexity on a project. This is a good thing, saving time, energy and money. It can lead to a relatively stress-free implementation and a simple, elegant, maintainable end product. A lot of discussion on KISS provides mechanisms to simplify requirements, design and implementation. Things that spring to mind include: avoiding scope creep; simple, obvious design and code; minimal run-time dependencies; refactoring to maintain simplicity; etc. However, there are a lot of implicit things that we do to KISS. Examples: small team sizes; minimal management layers; a tidy desk; mastery of a single IDE; clear, concise error messages; scripts to automate/encapsulate tasks; etc. What KISS practices do you apply? How have they been of benefit? I'm especially interested in non-obvious practices.

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before, and I plan to add detail. Let me know which ones you'd like to see more on:

    - Memory Manager redesign: predictable sizing and governing of SQL memory consumption:
      - sp_configure 'max server memory' now limits all memory committed by SQL Server
      - Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool)
      - Improved scalability of complex queries and operations that make >8K allocations
      - Improved CPU and NUMA locality for memory accesses
      - A single memory manager that handles page allocations of all sizes
      - Consistent out-of-memory handling and management across different internal components
    - Optimized Memory Broker for Column Store indexes (Project Apollo)
    - Resource Governor:
      - Supports larger-scale multi-tenancy by increasing the max number of resource pools: 20 -> 64 [for 64-bit]
      - Enables predictable chargeback and isolation by adding a hard cap on CPU usage
      - Enables vertical isolation of machine resources
      - Resource pools can be affinitized to individual schedulers, groups of schedulers, or NUMA nodes
      - New DMV for resource pool affinity
    - CLR 4 support, which adds the advantages of .NET Framework 4
    - sp_server_diagnostics:
      - Captures diagnostic data and health information about SQL Server to detect potential failures
      - Analyzes internal system state
      - Reliable when nothing else is working
    - New SQLOS DMVs (in 2008 R2 SP1):
      - SQL Server related configuration: new DMV sys.dm_server_services
      - OS related resource configuration: new DMVs sys.dm_os_volume_stats, sys.dm_os_windows_info, sys.dm_server_registry
      - XEvents for SQL and OS related Perfmon counters
      - Extended sys.dm_os_sys_info
      - See previous blog posts here and here.
    - Scale / mission critical:
      - Increased scalability: supports Windows 8 max memory and logical processors
      - Dynamic Memory support in Standard Edition; Hot-Add Memory enabled when virtualized
      - Various Tier 1 performance improvements, including reduced instructions for superlatches

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
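
    As a quick taste of the new DMVs, here's a minimal sketch querying sys.dm_server_services from Python (an assumption for illustration: pyodbc is installed and a local SQL Server 2012 instance accepts trusted connections; the driver name may differ on your machine):

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server Native Client 11.0};"
            "SERVER=localhost;Trusted_Connection=yes;"
        )
        # sys.dm_server_services exposes service state that previously
        # required looking outside SQL Server.
        rows = conn.execute(
            "SELECT servicename, status_desc, startup_type_desc "
            "FROM sys.dm_server_services"
        )
        for row in rows:
            print(row.servicename, row.status_desc, row.startup_type_desc)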

  • Does shutting down idle VMs improve performance?

    - by Samselvaprabu
    Our team members often come to me complaining that their VMs are slow. Some have suggested temporarily shutting down other VMs and then trying the slow VM again, but in most cases that does not help. Assume that I have assigned 4 GB of RAM and 2 CPUs to my VM; ideally it should not face performance issues. Our ESXi 4.1 server hosts multiple VMs on the same box (we have overcommitted memory and CPU). Does shutting down the other VMs really help to improve performance or not? [Note: We are using ESXi 4.1 and our hardware is an R710 server. We have a large number of VMs on a single server, so we have overcommitted memory.]

  • How can I enable anonymous access to a Samba share under ADS security mode?

    - by hemp
    I'm trying to enable anonymous access to a single service in my Samba config. Authorized user access is working perfectly, but when I attempt a no-password connection, I get this message:

        Anonymous login successful
        Domain=[...] OS=[Unix] Server=[Samba 3.3.8-0.51.el5]
        tree connect failed: NT_STATUS_LOGON_FAILURE

    The message log shows this error:

        ... smbd[21262]: [2010/05/24 21:26:39, 0] smbd/service.c:make_connection_snum(1004)
        ... smbd[21262]: Can't become connected user!

    The smb.conf is configured thusly:

        [global]
        security = ads
        obey pam restrictions = Yes
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind use default domain = true
        valid users = "@domain admins", "@domain users"
        guest account = nobody
        map to guest = Bad User

        [evilshare]
        path = /evil/share
        guest ok = yes
        read only = No
        browseable = No

    Given that I have 'map to guest = Bad User' and 'guest ok' specified, I don't understand why it is trying to "become connected user". Should it not be trying to "become guest user"?

  • multiple domains, one static IP address and latency

    - by shirish
    How is latency affected when multiple domains use one single static IP address? The scenario is shared web hosting. By latency I mean the DNS lookup the client has to do. As far as I understand it, the browser (via its resolver) would hit the root servers to figure out the IP address and where the name belongs, and then, when the request reaches the correct server, that server presumably consults some sort of table to determine which site name matches and serves that site to the user via the browser. Is my understanding correct, backwards, or what?
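
    For what it's worth, the table lookup in the last step is name-based virtual hosting: DNS resolves every hosted domain to the same IP, and the web server picks the site by inspecting the HTTP Host header. A minimal sketch demonstrating the mechanism (example.com and the vhost names are stand-ins):

        import http.client
        import socket

        ip = socket.gethostbyname("example.com")  # many names can map to this one IP

        for host in ("www.example.com", "blog.example.com"):  # hypothetical vhosts
            conn = http.client.HTTPConnection(ip, 80, timeout=5)
            conn.request("GET", "/", headers={"Host": host})
            print(host, "->", conn.getresponse().status)
            conn.close()

    The DNS lookup itself costs the same either way: each name resolves to one address, and resolver caching hides most of the cost after the first hit.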

  • pure-ftpd: one readonly/non-deletable file in home directory

    - by Bram Schoenmakers
    Is there a way to have a file in a user's FTP home directory without the user being able to modify or remove it over FTP? The user has write permission on his own home folder, and thus the ability to remove files. An exception should be made for a single file, which has the same filename and contents for each account. The solution I'm thinking of right now is to run a periodic script that checks for the presence of that file and, if it's missing, puts it back. But I wonder whether there's a better solution than this.
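
    One common trick outside pure-ftpd itself is chattr +i on the file: on ext filesystems an immutable file can't be unlinked, even from a directory the user can write to. Failing that, the periodic re-seed script could look like this sketch (the paths are hypothetical):

        import filecmp
        import shutil
        from pathlib import Path

        TEMPLATE = Path("/etc/ftp-files/README.txt")  # master copy
        HOMES = Path("/home/ftp")                     # user home directories

        # Restore the mandatory file wherever it is missing or was modified.
        for home in HOMES.iterdir():
            if not home.is_dir():
                continue
            target = home / TEMPLATE.name
            if not target.exists() or not filecmp.cmp(TEMPLATE, target, shallow=False):
                shutil.copy2(TEMPLATE, target)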
