Search Results

Search found 21028 results on 842 pages for 'single player'.


  • IIS not using available memory?

    - by Herb Caudill
    Recently launched an ASP.NET site running on a single 32-bit WS2003 box (SQL on a separate server). The server has 4GB installed, 3GB available. According to Task Manager, the w3wp.exe process is only using between 200 and 600MB. The site has tens of thousands of pages and makes heavy use of page output caching, so I would expect it to use a lot more of the available memory. The app pool isn't set to throttle memory usage. Is there anything else that might be limiting the amount of memory that IIS takes?

    Read the article

  • Use Ultracopier / Teracopy with Ditto Clipboard Manager

    - by drrtyrokka
    I really like the tool Ultracopier because of the copy acceleration and additional details on Windows. Also, the ability to pause a copy is really helpful. Additionally I want to use a clipboard manager such as Ditto. The problem is that when I run Ditto, my system won't use Ultracopier for copying anything; it uses the Windows default. How can I use the advantages of both tools? Is there any similar tool (perhaps a single tool) which gives me these options?

    Read the article

  • Joining H264 *without* re-encoding

    - by jdmuys
    I have two halves of a single show in two .MP4 files, encoded in H264. I would like to join them without re-encoding. Is this possible? I managed to create a joined video as a Quicktime file (.mov) using Quicktime Pro, but then Quicktime Pro will not convert it back to .MP4 without re-encoding. This may be because, looking inside the .mov file, the two H264 videos are still in there, separated as individual "objects". I have also struggled with MPEG Streamclip without reaching a real solution, but I may have missed something. Note that I do not have the same issue with MPEG2 files: I can export them to a .MPEG container or a .TS file, for example, and then join them without re-encoding using MPEG Streamclip. Any suggestion welcome, preferably using Mac software.
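
    One command-line route worth trying (my suggestion, not something from the question): ffmpeg's concat demuxer can join files that share identical codec parameters without re-encoding, and ffmpeg runs on Mac. A minimal sketch, with placeholder file names:

        # assumes both halves use the same H264/audio parameters
        # list the two halves in playback order (paths are placeholders)
        printf "file '%s'\n" part1.mp4 part2.mp4 > parts.txt

        # -c copy only remuxes the existing streams, so nothing is re-encoded
        ffmpeg -f concat -safe 0 -i parts.txt -c copy joined.mp4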

    Read the article

  • Is there a schematic overview of Ubuntu's architecture?

    - by joebuntu
    Hi there, as an enthusiastic, advanced Linux learner I'd love to get an overview of Linux's architecture/structure in general. You know, like "the big picture". I'm thinking of a large schematic graphic showing what is what, who is who, which system (e.g. X) comprises which subsystems (GDM/Gnome/Compiz) on the way from a to z, from boot to interactive desktop, including the most important background services (auth, network, cron, ...). Maybe a bit like this: http://www.flickr.com/photos/pgc/140859386/ but way more detailed. There's bootchart, which produces very comprehensive charts, but they are again too detailed and it's difficult to get the "big picture" from them. Is there such a thing? Possibly not for the whole system, but maybe for single subsystems? I had trouble searching for this, because search terms like "scheme" or "architecture" pointed in the wrong direction (a tool called "scheme" or CAD software for Linux). I appreciate any links. If there's interest in these schematic overviews and links, maybe someone could turn this post into a wiki post? Cheers, joebuntu

    Read the article

  • nVidia 9800 GTX+ X11 fails to initialize. no unity or lightdm

    - by rlemon
    I have just moved my work PC to 12.04 (not an upgrade, a fresh install), installing updates during the install. After everything has loaded (with no errors) and I restart, I get brought to a console 1 tty login. Console 7 looks like this: IIRC I did not have to finagle with my drivers on 11.10 to get this card working. If this is in fact a driver bug I will remove this post and submit the bug, but I'm not 100% confident that it is. I attempted to run unity --reset and got this: Lastly I tried $ sudo apt-get install nvidia-current, which tells me nvidia-current is already the newest version, so I ran $ sudo dpkg-reconfigure nvidia-current, which says /usr/sbin/dpkg-reconfigure: nvidia-current is broken or not fully installed. Anything I can try from here would be awesome. Currently the only way to get the system up and running was to shut down, plug one of my monitors into the onboard video, enable the onboard video card from the BIOS, then boot back up (and on my single monitor everything is fine). Update: I have been able to boot fresh with the external card plugged in as long as I don't take the updates with the install. Past this, if I install the nvidia drivers (nvidia-current or nvidia-current-updates) from the main server (or Canadian) I then get the problems. My proposal, which I don't know how to carry out: can I try installing the previous version of this driver? In the past, on another machine, I had issues with my NIC driver being funky... downgraded to the previous driver and bam, everything was merry and well.
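
    On the "previous version of this driver" idea, here is a hedged sketch of how apt can pin an explicit package version; the version string below is only a placeholder, so substitute whatever apt-cache policy actually lists on the system:

        # see which driver versions the enabled repositories offer
        apt-cache policy nvidia-current nvidia-current-updates

        # remove the current driver, then install a specific older version
        # ("295.40-0ubuntu1" is an example placeholder, not a recommendation)
        sudo apt-get purge nvidia-current
        sudo apt-get install nvidia-current=295.40-0ubuntu1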

    Read the article

  • How to execute a command on multiple hosts using IPv6 only?

    - by math
    First of all, there is pdsh, which is essentially a parallel distributed shell that can execute commands on a list of given hosts. However, I find myself in an IPv6-only setting. It seems that pdsh is not able to use IPv6, as I am getting error messages:

        pdsh -w ^hostnames my_command
        pdsh@myhost: gethostbyname("foobar") failed

    I also tried using IPv6 addresses only, which also didn't work. So how do you run a single shell script for administrative purposes (no SGE stuff, or similar) on a bunch of hosts that are reachable only over IPv6?
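
    As a stopgap while pdsh lacks IPv6 support here, a plain ssh loop can do the fan-out, since ssh's -6 flag forces IPv6. This is only a sketch: the host list file and the command are placeholders, and it runs serially rather than in parallel like pdsh:

        # run one command on every host listed, one hostname or IPv6 address per line
        while read -r host; do
            ssh -6 -o BatchMode=yes "$host" 'uname -a'
        done < hostnames.txt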

    Read the article

  • SharePoint 2010 MySites - Simple explanation needed!

    - by Chris W
    I've been playing around with the 2010 beta for a couple of weeks, experimenting with topology options etc. I think I've got myself totally confused as to how it works, so if there are any SP experts out there who can explain things in simple terms for me I'd appreciate it! I want to set up a farm with 3 servers providing the content & MySites. I presume that the way to do this is to load balance or DNS round robin traffic between the 3 servers. The bit where I'm confused is that the My Site Settings page asks for a specific My Site Host, so all My Site traffic will be pushed to a single server even though we have 3 in the farm. If this host fails I presume MySites will be unavailable. Is this right? How do I configure it so that access to MySites is load balanced across the 3 servers in the farm?

    Read the article

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here. When Dad (the fascist nerd who is inflicting this on his family) first uses the application he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu and looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad, though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster, she changes the weight to 110 lbs. Later that day the Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs. Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?

    Read the article

  • Mac OS X: Change $PATH from within python script

    - by Eye of Hell
    I have a bunch of Python scripts. One of them installs software (subversion) that requires its path to be added to $PATH. After it is installed, I want the next script to use the software. If I run export PATH=/opt/subversion/bin:$PATH in bash between the first and second script, all is ok. But if I add os.system( 'export PATH=/opt/subversion/bin:$PATH' ) as the last command of the first script (the one that installs subversion), $PATH remains unaltered after it exits. Is there any way to change $PATH from within a Python script so that it remains changed after the script finishes (inside a single bash session, of course; I know about /etc/profile)?
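
    A child process can never change its parent shell's environment, so the usual workaround is to set PATH in the layer that outlives both scripts. A minimal sketch of such a wrapper, assuming the two scripts are called install_svn.py and use_svn.py (hypothetical names):

        #!/bin/bash
        # run the installer, extend PATH in this (parent) shell,
        # then run the second script, which inherits the updated PATH
        python install_svn.py
        export PATH=/opt/subversion/bin:$PATH
        python use_svn.py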

    Read the article

  • Set 802.1Q tagged port on VLAN1 on Dell PowerConnect switch

    - by Javier
    I'm having big trouble adding this Dell switch to my network. Here we use several VLANs to segment traffic. All switches (3Com and D-Link mostly) have the same VLANs configured; most ports are 'untagged' and belong to a single VLAN, except for the ports used to join the switches together (in a star topology), which belong to all VLANs and use 802.1Q tags. So far, it works really well. But on this new switch (a Dell PowerConnect 5448), the settings are very different (and confusing). I have configured the same VLANs, and the uplink ports are set in 'general' mode (supposed to be fully 802.1Q compliant). I can set the VLAN membership as 'T' on these ports for all VLANs except VLAN 1. It always stays as 'U' on VLAN 1. Any ideas?

    Read the article

  • Overridden ASPNet.config does not apply for legacyImpersonationPolicy

    - by Grumbler85
    I tried to override the <legacyImpersonationPolicy> element so that a single application will enable this policy (which is necessary, since this application breaks if it is disabled). So my Framework64/aspnet.config states:

        <configuration>
          <runtime>
            <legacyUnhandledExceptionPolicy enabled="false" />
            <legacyImpersonationPolicy enabled="false" />
            <alwaysFlowImpersonationPolicy enabled="false" />
            <SymbolReadingPolicy enabled="1" />
            <shadowCopyVerifyByTimestamp enabled="true"/>
          </runtime>
          <startup useLegacyV2RuntimeActivationPolicy="true" />
        </configuration>

    And a local aspnet.config file has this change:

        <legacyImpersonationPolicy enabled="false" />

    Procmon tells me the file is read by w3wp.exe, but the settings do not apply. Can anyone point out how to correctly override the setting? *The server has been restarted in the meantime, but there are still no changes.

    Read the article

  • PHP processes run one at a time, always taking 100% of one core

    - by Derek Kurth
    We have seven websites written in PHP running on a Windows 2008 server with IIS 7.5. They are all very slow right now. When I look in Task Manager, I see around 10 php-cgi.exe processes, and they are all taking 0% of the CPU, except one, which is taking 25%. It's a quad-core server, so it's taking 100% of one core. If I watch for a few seconds, the process taking 25% will go to 0%, and a different php-cgi.exe process will jump to 25%. So all the php-cgi.exe processes are just lined up, waiting on a single core, and each process uses 100% of the processor when it can. Each of the 7 sites is in its own application pool in IIS, and we're using FastCGI. The PHP version is 5.3. Any ideas? Thanks!

    Read the article

  • Catch headset pause/play keypresses in Windows

    - by akshay2000
    I have a new Ultrabook which has a single audio jack for input and output, instead of the separate 3.5 mm jacks we used to have on older machines. The jack is probably similar to the American audio jack specification, or like the one found on a MacBook Pro. I have tried to use it with the Apple, HTC and Nokia earphones which ship with most smartphones. The microphone on the headset works the way it should. The thing is that the headsets also come with remote controls to control volume and playback. I am sure that those key presses are sent to Windows. I was hoping to catch those events and bind them to actual media keys so that I can control music playback. I guess this happens on Macs; I want to do a similar thing on Windows. I'm just not sure where I can catch the events. Driver level? Application level?

    Read the article

  • In an online questionnaire, what is the best way to design a database for keeping track of users' attempts?

    - by user1990525
    We have a web app where users can take online exams. An exam admin will create a questionnaire. A questionnaire can have many questions. Each question is a multiple choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions. Users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times, and we have to keep track of all of their attempts, e.g.

        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       1 June 2013   1
        1        1                 2            b       1 June 2013   1
        1        1                 1            c       2 June 2013   2
        1        1                 2            d       2 June 2013   2

    Now it can also happen that after a user has attempted the same questionnaire twice, the admin deletes a question from that questionnaire; the user's attempt history should still reference it, so that the user can see that question in his attempt history in spite of the admin deleting it. If the user now attempts the changed questionnaire he should see only 1 question:

        User_id  Questionnaire_id  question_id  answer  attempt_date  attempt_no
        1        1                 1            a       3 June 2013   3

    Also, if some part of a question is later modified, the user's attempt history should show the question as it was before the modification, while any new attempt should show the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not do a physical delete, just mark the question inactive so that the history can still track the user's attempt; for modifications, create versions of questions, with each new attempt referring to the latest version of each question and the history keeping a reference to the version of the question at attempt time.
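
    A minimal sketch of the "versioned questions, soft deletes" idea, written against SQLite only so the schema is runnable as-is; the table and column names are illustrative, not taken from the question:

        sqlite3 quiz.db "
        -- one row per edit of a question; old versions are never changed
        CREATE TABLE question_version (
            question_id INTEGER NOT NULL,
            version     INTEGER NOT NULL,
            body        TEXT    NOT NULL,
            is_deleted  INTEGER NOT NULL DEFAULT 0,  -- soft delete instead of DELETE
            PRIMARY KEY (question_id, version)
        );
        -- each answer records exactly which version of the question the user saw
        CREATE TABLE attempt_answer (
            user_id          INTEGER NOT NULL,
            questionnaire_id INTEGER NOT NULL,
            attempt_no       INTEGER NOT NULL,
            question_id      INTEGER NOT NULL,
            version          INTEGER NOT NULL,
            answer           TEXT,
            attempt_date     TEXT,
            FOREIGN KEY (question_id, version)
                REFERENCES question_version (question_id, version)
        );"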

    Read the article

  • Squid logs being flooded because intercept and authentication are enabled together

    - by Horace
    I have done some hefty Googling and I can't seem to find a single solution to this issue that I am currently experiencing. Here is a sample configuration from Squid that I have:

        #
        # DIGEST Auth
        #
        auth_param digest program /usr/sbin/digest_file_auth /etc/squid/digpass
        auth_param digest children 8
        auth_param digest realm LHPROJECTS.LAN Network Proxy
        auth_param digest nonce_garbage_interval 10 minutes
        auth_param digest nonce_max_duration 45 minutes
        auth_param digest nonce_max_count 100
        auth_param digest nonce_strictness on
        # Squid normally listens to port 3128
        http_port 192.168.10.2:3128 transparent
        https_port 192.168.10.2:3128 intercept
        http_port 192.168.10.2:3130

    As noted above, I have three ports defined: two of them are transparent/intercept and one is a regular http port (which I use for authentication). This works rather well in this configuration, however my logs are getting flooded with the entry "authentication not applicable on intercepted requests" whenever a transparent connection is made. So far, I can't seem to find any documentation that describes how to suppress these messages?

    Read the article

  • SBS 2008 reinstall on new machine

    - by Jon
    We purchased a single license for Windows Small Business Server 2008 for a medical office, and soon after, a power fault caused the server to overload, destroying the power supply and motherboard. I reinstalled on a new server and have the domain up and running again, but under System Properties it tells me I still need to activate Windows, and that the product key currently entered is invalid for activation. I've been trying to figure out how to get in touch with Microsoft to explain the situation and get a new activation key issued without having to pay for another license, which we can't afford, but the online documentation hasn't been particularly clear on the subject. Does anyone know how to do that, or whether I do need the activation key to continue using SBS 2008? (It currently tells me I have 56 days left to activate.) Any tips would be appreciated. I'm not strictly a computer guy and I'm feeling a little lost.

    Read the article

  • Starting VMs with an executable with as low overhead as possible

    - by Robert Koritnik
    Is there a way to create a virtual machine and start it from an executable file? If possible, it should start as quickly as possible. Strange situation? Not at all. Read on... Real-life scenario: since we can't have a domain controller on a non-server OS, it would be nice to have a domain controller in as thin a machine as possible (possibly Samba or similar, because we'd like to make it start up as quickly as possible, in a matter of a few seconds), packed into a single executable. We could then configure our non-server OS to run the executable when it starts, before a user logs in. This would make it possible to log in to a domain.
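
    One concrete way to get "a VM behind an executable" (an assumption on my part, the question doesn't settle on a hypervisor) is a VirtualBox guest wrapped in a small script and started headless; VBoxManage startvm --type headless is a real VirtualBox command, while the VM name and script path are placeholders:

        #!/bin/bash
        # start-dc.sh - boot the (hypothetical) "dc-samba" VM without a GUI window;
        # run it at host startup, before interactive login, if the host OS allows it
        VBoxManage startvm "dc-samba" --type headless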

    Read the article

  • Backing up data (including mysqldumps) to S3

    - by seengee
    We have a web app on a number of servers and we want to add an additional layer of redundancy by backing up the key data to S3. The key data is the MySQL database and a folder containing dynamically created site assets - predominantly images. Some kind of rsync-based solution would initially seem the best plan. A couple of years ago we played with S3cmd (in particular s3cmd sync) with some success, but we didn't find it particularly reliable, although this may have changed since. It's occurred to me, though, that an rsync solution might not work particularly well with a single db.sql file created with mysqldump, as I assume this means the whole database gets transferred each time; with multiple databases of over 1GB this is going to add up to a lot of traffic (and $s) very quickly. With the image files I could simply transfer files modified within the last day, which would be far simpler. What approach should I look at?
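
    A hedged sketch of the split approach described above: push a fresh compressed dump per database each night (accepting that the whole dump is transferred), and let s3cmd sync handle the incrementally changing image folder. The database name, bucket and paths are placeholders:

        #!/bin/bash
        set -e
        DATE=$(date +%F)

        # full dump per database, compressed before upload
        mysqldump --single-transaction dbname | gzip > /tmp/dbname-$DATE.sql.gz
        s3cmd put /tmp/dbname-$DATE.sql.gz s3://my-backup-bucket/mysql/
        rm /tmp/dbname-$DATE.sql.gz

        # only changed or new image files get uploaded
        s3cmd sync /var/www/site/assets/ s3://my-backup-bucket/assets/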

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. However, the example I'm using is even useful in practice. Due to the extremely low overhead of zones I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's say it diplomatically ... a job you give to someone you want to punish. Puppet can help in this case by managing the configuration and easing the distribution. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter whether they are virtual or physical. A warning first: Puppet is a really, really vast topic. This article is really basic and doesn't go more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately. (more)
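
    To make the manifest idea concrete, here is a tiny, self-contained sketch you can test locally with puppet apply before involving a master; the hostname and IP are made up, while the host resource type is standard Puppet:

        # write a one-resource manifest, then apply it locally (no puppet master needed)
        echo "host { 'appserver01': ip => '192.0.2.10', }" > /tmp/hosts.pp
        puppet apply /tmp/hosts.pp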

    Read the article

  • Gave a talk at SoCal Code Camp at USC today titled “Linq to Objects A-Z”

    - by dotneteer
    I gave a talk at SoCal Code Camp on Linq to Objects. With careful categorization of Linq functions, I was able to cover the entire set of Linq functions in only 35 minutes and spend the rest of the time on demos. In my first demo, I showed that I was able to write a top-20-URLs type of query using 4 lines of library code and 9 lines of Linq code, without tools like Log Parser. I also demonstrated that I only needed to change 2 lines of code to go from querying a single log file to a whole directory of log files. It would be just as simple to run the query against multiple servers in parallel. In my second demo, I discussed how to turn graph depth-first search (DFS) and breadth-first search (BFS) into a Linq-queryable problem. The class LingToGraph contains the only DFS and BFS code I ever have to write; the rest can be done in the lambda passed to the DFS or BFS calls. In future blogs, I will provide a more detailed explanation of the code. Links: Link to Powerpoint slides. Link to demos.

    Read the article

  • Change Keybindings (hardware to software)

    - by Daniel
    I ran a search for this, but the answers I saw were referring to something altogether different from what I'm asking for. So let me clarify: I'm not asking how to change key-combo shortcuts. I'm asking--how do you actually change what your computer thinks you did when you press a given key? An example of what I mean (and the reason I'm asking): I'm a Chrome user, and I use Windows alongside Ubuntu. I own a Lenovo Thinkpad T61p--it came with my scholarship package, and I would have shopped for a nicer computer if I could have. The T61p has two buttons above the left and right arrow keys that map to browser commands to go back and forward one page. This is extremely frustrating for me, as I use the arrow keys, and a single accidental keystroke will catch me going back a page, losing temporary data, and yelling at my stupid keyboard. At the same time, I'm the type of person who keeps way too many tabs open. Chrome doesn't let me reconfigure keyboard shortcuts, and the only ways it allows you to switch between tabs are ctrl+tab, ctrl+shift+tab, and ctrl+page up/down. I was using Notepad++, and they had finally found the solution to both problems: the page back and forward keys functioned as tab back and forward keys. I went through quite some effort to learn how to change the keybindings in Windows. The page back and page forward keys are now the page up and page down keys, respectively, and if I hit control, they let me switch tabs easily, and rather pleasantly. And if I hit the keys by accident, no harm, no foul. Alas, I'm in Ubuntu now, and I need to go through the process again. While I couldn't find the answer online, like I did for Windows, I know Ubuntu has nice, supportive communities like this one, where, hopefully, somebody can tell me either how to do what I did in Windows, or directly make it so that my computer changes tabs when I hit those buttons (removing the ctrl button from the tab-changing command).
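
    On Ubuntu, one low-level way to do this kind of remap (a sketch, not a verified recipe for the T61p) is to use xev to discover the keycodes the back/forward buttons send, then xmodmap to rebind them; the keycodes 166 and 167 below are assumptions, so substitute whatever xev reports:

        # press the browser back/forward buttons in the xev window to see their keycodes
        xev | grep --line-buffered keycode

        # rebind them to Page Up / Page Down (the keycode numbers are placeholders)
        xmodmap -e 'keycode 166 = Prior'
        xmodmap -e 'keycode 167 = Next'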

    Read the article

  • Enterprise Portal Issue with the Ax Demo VPCs

    - by ssmantha
    Microsoft’s Ax Demo VPC is basically configured for a static IP address, 192.168.0.1; this is due to the fact that the VPC has a Domain Controller configured in it, which requires a static IP. When we put this VPC on a network with a different subnet and change the IP, you can observe that the sites http://sharepoint and http://sharepoint/EP cease to function and show “Page Not Found” errors in the browser. This is mainly due to the DNS configuration, which is not updated. Below is a screenshot of the changes that need to be made to get the site functioning properly. Change the following entries in the Forward Lookup Zones of DNS management: the websites default, SharePoint and projectserver are all mapped to a single port in IIS, i.e. port number 80. These websites are recognised by host headers, and these host headers are configured in DNS with IP address entries that become incorrect when you change the IP address of the VPC. Just change these values to point to the local loopback adapter (127.0.0.1), and change the DNS to point to this address in the TCP/IP properties as shown below. This will resolve the issue with the website rendering. Initially you may get timeout errors while browsing these websites; be patient and try again and it will work.

    Read the article

  • Pages partially load on rapid refresh

    - by user101570
    I recently set up a VPS slice with 256MB to run a LAMP stack (Ubuntu 11.04, Apache2, Mysql, PHP5). So far I'm only running a simple Wordpress site on an IP-based virtual host I set up. The performance is excellent, but I've noticed that if I send multiple HTTP requests from the same IP in a short time period, only partial pages are rendered. Then if I wait a bit and refresh the page, the entire page loads again. I noticed this behaviour when accessing the site from two browsers from my office desktop, but it also presents itself if I quickly navigate the site from a single browser (any browser). I'm guessing this is an Apache phenomenon, as the pages are rendered correctly except under the conditions above, but perhaps I'm wrong here. Could it be my hosting company with some kind of DOS protection in place? As a relative Linux/server noob, I'd really appreciate any insight into what settings in Apache could explain this behaviour, and how I might go about changing it.
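
    Before digging into Apache settings, it may help to reproduce the problem outside a browser; the curl loop below (my own suggestion, with the URL as a placeholder) fires several quick requests and prints the status code and bytes received, so a truncated response shows up as a smaller size:

        # five rapid requests against the same page; compare the downloaded sizes
        for i in 1 2 3 4 5; do
            curl -s -o /dev/null -w '%{http_code} %{size_download}\n' http://example.com/
        done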

    Read the article

  • No MAU required on a T4

    - by jsavit
    Cryptic background

    One of the powerful features of the T-series servers is hardware crypto acceleration, which dramatically speeds up the compute-intensive algorithms needed to encrypt and decrypt data. Previously, administrators setting up logical domains on older T-series servers had to explicitly assign crypto resources (called "MAU" for historical reasons, from the T1 chip that had "modular arithmetic units") to domains that had a significant crypto workload (say, an SSL-based web server). This could be an administrative burden, as you had to choose which domains got the crypto units, and issue the appropriate ldm set-mau N mydomain commands.

    The T4 changes things

    The T4 is fast. Really fast. Its clock rate and out-of-order (OOO) execution provide the single-thread performance that T-series machines previously did not have. If you have any preconceptions about T-series performance, or SPARC in general, based on the older servers (which, it must be said, were absolutely outstanding for multi-threaded applications), those assumptions are now obsolete. The T4 provides outstanding performance for all kinds of workloads, as illustrated at https://blogs.oracle.com/bestperf. While we all focused on this (did I mention the T4 is fast?), another feature of the T4 went largely unnoticed: the T4 servers have crypto acceleration "just built in", so administrators no longer have to assign crypto accelerator units to domains - it "just happens". This is way, way better, since you have crypto everywhere by default without having to manage it like a discrete and limited resource. It's a feature of the processor, like doing an integer add. With T4, there is no management necessary; you just have HW crypto everywhere, all the time, seamlessly. This change hasn't been widely advertised, and some administrators have wondered why they were unable to assign a MAU to a domain as they did with T2 and T3 machines. The answer is that there is no longer any separate MAU, so you don't have to take any action at all - just leave the default of 0.

    Summary

    Besides being much faster than its predecessors, the T4 also integrates hardware crypto acceleration so it's seamlessly available to applications, whether domains are being used or not. Administrators no longer have to control how it is allocated - it "just happens".

    Read the article

  • OSB, Service Callouts and OQL

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This series will delve into OSB internals, the problem associated with the usage of Service Callouts under high load, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. The second section of the "OSB, Service Callouts and OQL" blog posting will delve into thread dump analysis of the OSB server, detecting threading issues relating to Service Callouts, and using heap dumps and OQL to identify the related proxies and business services involved. The final section of the series will focus on the corrective action to avoid Service Callout related OSB server hangs. Before we dive into the solution, we need to briefly discuss Work Managers in WLS. Please refer to the blog posting for more details.

    Read the article
