Search Results

Search found 5472 results on 219 pages for 'jack low'.


  • Active Directory management with low user rights

    - by DemonWareXT
    Our problem: the client, a normal user, has to be able to reset multiple passwords at once - around 30 in one go. This would call for PowerShell or something along those lines, but to manage AD with PowerShell one needs to be a domain administrator. My solution would be to make a service that runs on the AD server and takes connections from a client program; the service would then make the AD changes. So far so good - I would just like to hear some other thoughts on this problem, because I surely can't be the only one who has it.

    Read the article

  • Starting VMs with an executable with as low overhead as possible

    - by Robert Koritnik
    Is there a solution to create a virtual machine and start it from an executable file that boots the machine, as quickly as possible? Strange situation? Not at all. Read on... Real-life scenario: since we can't have a domain controller on a non-server OS, it would be nice to have a domain controller in as thin a machine as possible (possibly Samba or similar, because we'd like it to start up as quickly as possible - in a matter of a few seconds), packed into a single executable. We could then configure our non-server OS to run the executable when it starts, before any user logs in. This would make it possible to log in to a domain.

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1 Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.). It can be painful to wait sometimes. Plus I do some side contract game development, and it can be very difficult to playtest with other developers (a 200 ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26 Mb down, 10 Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had. But the proxy server is a quad-core Xeon, and the receiving computer is a pretty decent i7 machine, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser at a proxy server rather than install software on my client machine. EDIT: I thought about my question a little more, and realize I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.

    Read the article

  • Low FPS in some games, but hardware not fully used

    - by Mario De Schaepmeester
    I just did a little funny experiment in the game/sim "Train Simulator 2013". I normally get good FPS in it (around 30) at full settings. What I did was make a really, really long train so that the calculations the sim needed to make were enormous (the sim is quite realistic; it takes all things into account like speed/acceleration, G-forces, comfort levels, possible wheel slip and many more, and most of those things on each carriage separately). This resulted in only 14 FPS as reported by the game, but it felt more like 8 FPS or so. I have a Logitech G15 keyboard which has an LCD, and it allows me to monitor CPU/RAM and video card load on it. The strange thing is, all CPU cores were busy, but the total load was only about 60% maximum at all times. The video card was only at 30% load (possibly an important note: its memory was full, which is however not unusual for the game in question). The RAM had plenty of room, and there weren't many operations as it didn't grow or shrink much. I just have the feeling that the game would run smoother if it used more of my hardware power. Why is it not doing so? I had the same in another game, The Elder Scrolls: Morrowind, when using more than 100 mods (that all use scripting) and a few high-res texture mods, plus a full-on graphics improvement program. That engine is very old (2003), so I thought this might be the cause (not being optimised for multithreading). I had thought of possible causes, like: the operating system doesn't let the games use all the resources, or they don't make use of multi-threading appropriately. To eliminate the former, I tried a CPU stress tool and it got 100% CPU juice as I let it run, so the OS is not the problem. I did give its thread "higher" priority, though. My actual question: in both games, I did things the engine was not really built to do or support. Can those games' framerate be limited because their own engine can't cope? What is the real reason and, more importantly, can I help it? And in any case, could something actually be wrong with my hardware? It's all reasonably new - a couple of months old - and I (almost) never experience any other trouble. Modern and much more demanding games work absolutely fine. Specs: CPU: AMD Phenom II 965 X4 @ 3.4 GHz; RAM: 8 GB DDR3; Video: MSI GTX560 (nVidia chip) with 1 GB of GDDR5 memory; OS: Windows 7 Ultimate 64-bit. Nothing overclocked.

    Read the article

  • VPS showing low disk space despite there being nothing major on it

    - by SheoNarayyan
    Hello experts, on my VPS server I was trying to see the used disk space. When I open My Computer it shows 17.9 GB free out of 39.8 GB, which means 21.9 GB is used. However, when I select all files and folders on C: and check the total size, it only comes to approximately 11 GB. The difference is around 10 GB. Where is this 10 GB going if I have not stored anything else here? I put the above question to my VPS provider and he responded: "Check hidden files/system files/etc. This is default Windows OS and its utilization and not specific to the setup. If you want specifics of usage, you can go ahead and get in touch with the Microsoft support team and they'll provide you with the exact specification of the same." I am sure that the Windows OS must not be taking up 10 GB of space for hidden files and folders. My VPS has Windows Server 2008 R2 installed. Can anyone tell me who is right?

    Read the article

  • We want to set up a low-cost private cloud [closed]

    - by Virtual Jasper
    We are a small company with very limited funds. In order to improve our server reliability, we are looking at migrating to the cloud. We have seen some cloud providers; they charge by resources such as CPU, RAM, disk space, high availability, etc. We have a server team, so we are also considering building a private cloud. We looked at Windows 8 Server, but it needs a license fee, so we are looking at the Linux side, at Ubuntu and OpenStack. What is the difference between the Ubuntu and OpenStack solutions? Are both free of software license fees, with payment only for technical support?

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which raises some questions, e.g. if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website, "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • Nginx and low-speed connections: request terminates after 253 seconds

    - by meze
    I'm trying to make nginx handle static files. Everything is working fine except that when I throttle my connection speed to 8 kbit/s, the loading of a file just stops after 253-255 seconds (4.2 min according to Chrome). There is no error in the log, the status code is 200, but the response is received only partially. If I disable nginx and have Apache send the same file, it loads successfully after 10 minutes. The config I use for debugging is: client_header_buffer_size 16k; large_client_header_buffers 4 8k; client_max_body_size 50m; client_body_buffer_size 16k; client_header_timeout 20m; client_body_timeout 20m; send_timeout 20m; Did I miss some configuration?

    Read the article

  • Auto restart server if virtual memory is too low

    - by Sukhjinder Singh
    There is quite a lot of software running on my server: httpd, varnish, mysql, memcache, java... Each of them uses a part of the virtual memory, and varnish was configured to be allocated 3 GB of memory. Due to a high traffic load (around 100K), our server ran out of memory and the oom-killer was invoked; we had to reboot the server. We have 8 GB of virtual memory, and for various reasons we cannot add more. My question is: is there any automated script which will monitor how much virtual memory is left and, based on certain criteria - let's say less than 500 MB left - restart the server automatically? I know this is not the proper solution, but we have to do it; otherwise we don't know when the server will hit OOM, and by the time we notice and restart the server, we have lost our visiting users.
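
    A watchdog along those lines can be sketched in a few lines of C. The program below polls /proc/meminfo and triggers a graceful reboot when MemAvailable drops under a threshold. Treat it as a sketch: the 500 MB floor, the 30-second polling interval and the shutdown command are placeholders, MemAvailable requires a reasonably recent kernel (fall back to MemFree otherwise), and tuning the services' memory limits or the OOM killer is normally preferable to rebooting.

    /* memwatch.c - reboot when available memory falls below a threshold (sketch) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        const long threshold_kb = 500L * 1024;            /* placeholder: 500 MB floor */
        for (;;) {
            FILE *f = fopen("/proc/meminfo", "r");
            if (f == NULL) {
                perror("fopen /proc/meminfo");
                return 1;
            }
            char line[256];
            long avail_kb = -1;
            while (fgets(line, sizeof line, f) != NULL) {
                /* MemAvailable exists on kernel >= 3.14; older kernels only expose MemFree */
                if (sscanf(line, "MemAvailable: %ld kB", &avail_kb) == 1)
                    break;
            }
            fclose(f);
            if (avail_kb >= 0 && avail_kb < threshold_kb) {
                /* placeholder action: graceful reboot, requires root */
                system("/sbin/shutdown -r now");
                return 0;
            }
            sleep(30);                                    /* placeholder polling interval */
        }
    }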

    Read the article

  • Windows Server 2008 Alerting to Low memory

    - by t1nt1n
    I have a file and print server running Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1, fully updated). Every evening between 19:20 and 19:30 our monitoring software reports that the available memory is at 1% and performance is dire. There is nothing in the event logs to point to an issue, and at that point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done: checked whether any backups are running - none at all; checked scheduled tasks - none before or during this time period; moved the VM to another host; disabled AV to rule it out as the issue. The server does not have any problems with memory during the day when fully loaded with about 50 users. The server did have 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight), there is very little CPU usage but RAM usage goes up.

    Read the article

  • Threads, Sockets, and Designing Low-Latency, High Concurrency Servers

    - by lazyconfabulator
    I've been thinking a lot lately about low-latency, high-concurrency servers - specifically, HTTP servers. HTTP servers (fast ones, anyway) can serve thousands of users simultaneously with very little latency. So how do they do it? As near as I can tell, they all use events. Cherokee and Lighttpd use libevent. Nginx uses its own event library performing much the same function as libevent, that is, picking a platform-optimal strategy for polling events (like kqueue on *BSD, epoll on Linux, /dev/poll on Solaris, etc.). They all also seem to employ a multiprocess or multithreaded strategy once the connection is made - using worker threads to handle the more CPU-intensive tasks while another thread continues to listen for and handle connections (via events). This is the extent of my understanding and of my ability to grok the thousand-line sources of these applications. What I really want are finer details about how this all works. In the examples of using events I've seen (and written), the events handle both input and output. To this end, do the workers employ some sort of input/output queue to the event-handling thread? Or do these worker threads handle their own input and output? I imagine a fixed number of worker threads are spawned, and connections are lined up and served on demand, but how does the event thread feed these connections to the workers? I've read about FIFO queues and circular buffers, but I've yet to see any implementations to work from. Are there any? Do any use compare-and-swap instructions to avoid locking, or is locking less detrimental to event polling than I think? Or have I misread the design entirely? Ultimately, I'd like to take enough away to improve some of my own event-driven network services. Bonus points to anyone providing solid implementation details (especially for stuff like low-latency queues) in C, as that's the language my network services are written in.
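
    Since the question asks for C, here is a minimal sketch of the handoff pattern described above: one event thread blocks in epoll, accepts connections, and feeds the client sockets to a fixed pool of worker threads through a mutex/condition-variable FIFO queue. The port, queue depth, worker count and canned HTTP response are arbitrary placeholders, error handling is largely omitted, and real servers use non-blocking I/O, lock-free rings and per-connection state machines rather than this blocking read/write.

    /* evhandoff.c - epoll accept loop feeding a worker pool via a FIFO queue (sketch) */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define QSIZE   256                        /* placeholder queue depth */
    #define WORKERS 4                          /* placeholder worker count */

    static int queue[QSIZE];
    static int qhead, qtail, qcount;
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qnotempty = PTHREAD_COND_INITIALIZER;

    static void enqueue(int fd) {              /* called by the event thread */
        pthread_mutex_lock(&qlock);
        if (qcount < QSIZE) {
            queue[qtail] = fd;
            qtail = (qtail + 1) % QSIZE;
            qcount++;
            pthread_cond_signal(&qnotempty);
        } else {
            close(fd);                         /* queue full: shed load */
        }
        pthread_mutex_unlock(&qlock);
    }

    static int dequeue(void) {                 /* called by worker threads */
        pthread_mutex_lock(&qlock);
        while (qcount == 0)
            pthread_cond_wait(&qnotempty, &qlock);
        int fd = queue[qhead];
        qhead = (qhead + 1) % QSIZE;
        qcount--;
        pthread_mutex_unlock(&qlock);
        return fd;
    }

    static void *worker(void *arg) {
        (void)arg;
        const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
        char buf[4096];
        for (;;) {
            int fd = dequeue();
            read(fd, buf, sizeof buf);         /* naive: assume the request arrives in one read */
            write(fd, resp, strlen(resp));
            close(fd);
        }
        return NULL;
    }

    int main(void) {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);           /* placeholder port */
        bind(lsock, (struct sockaddr *)&addr, sizeof addr);
        listen(lsock, 128);

        pthread_t tid;
        for (int i = 0; i < WORKERS; i++)
            pthread_create(&tid, NULL, worker, NULL);

        int ep = epoll_create1(0);
        struct epoll_event ev;
        ev.events = EPOLLIN;
        ev.data.fd = lsock;
        epoll_ctl(ep, EPOLL_CTL_ADD, lsock, &ev);

        for (;;) {                             /* the event thread: accept and hand off only */
            struct epoll_event events[64];
            int n = epoll_wait(ep, events, 64, -1);
            for (int i = 0; i < n; i++) {
                int client = accept(events[i].data.fd, NULL, NULL);
                if (client >= 0)
                    enqueue(client);
            }
        }
    }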

    Read the article

  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, and a way to keep it running in case of power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1,000 in this chart: http://www.cpubenchmark.net/index.php What's the best way to do this? EDIT: more info. I need to run a home server. The server will perform light tasks mainly; an x86 CPU sadly is the only route for my use. I want to be able to run the server and the router/modem in case of power failure. Now, regarding how long the power will fail: 1) 1 hour is OK for most situations (say 90%). 2) 3 hours is OK (say 98%). 3) 6 hours is more than OK (say 99.5%). 4) In extreme cases the power might fail for days; I believe this is very unlikely to happen. More is great but, really, how often will power fail for more than 3 hours? I believe once a year at most, and that's too rare to care about. Given the above, I am looking for a cost-effective way to achieve 1-3 hours of power, or 6 hours if possible. Solutions: you guys gave me great ideas. 1) Power generator: no good, as power will fail for 10 seconds before it kicks in. Also, I read online that "clean" power generators cost $1.5k+, so it's out of budget. A non-clean generator might damage electronics, right? 2) Solar power: I don't know for sure about this. It sounds like a great idea - too good to be true, honestly. For only $200 I get 100+ W? What are the drawbacks here? 3) UPS: this seems to be the best. The only problem is the cost. Cost < $200 = great; $400 = budget limit.

    Read the article

  • Java: Send BufferedImage through Socket with a low bitdepth

    - by Martijn Courteaux
    Hi, the title says enough, I think. I have a full-quality BufferedImage and I want to send it through an OutputStream with a low bit depth. I don't want an algorithm that changes the quality pixel by pixel, so it stays full quality. So, the goal is to write the image (at full resolution) through the OutputStream with a very small size. Thanks, Martijn

    Read the article

  • AVCam memory low warning

    - by Red Nightingale
    This is less a question and more a record of what I've found around the AVCam sample code provided by Apple for iOS 4 and 5 camera manipulation. The symptom of the problem for me was that my app would crash on launching the AVCamViewController after taking around 5-10 photos. I ran the app through the memory leak profiler and there were no apparent leaks, but on inspection with the Activity Monitor I discovered that something called mediaserverd was increasing by 17 MB every time the camera was launched, and when it reached ~100 MB the app crashed with multiple low-memory warnings.

    Read the article

  • MATLAB - applying a low-pass filter to a vector?

    - by waitinforatrain
    If I have a simple low-pass filter, e.g. filt = fir1(20, 0.2); and a vector of numbers (a signal), e.g. [0.1, -0.2, 0.3, -0.4] etc., how do I actually apply the filter I've created to this signal? It seems like a simple question, but I've been stuck for hours. Do I need to manually calculate it from the filter coefficients?
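
    For reference, MATLAB's filter function applies FIR coefficients directly - y = filter(filt, 1, signal), with the denominator set to 1 because an FIR filter has no feedback terms - so no manual calculation is needed. The sketch below, written in C, shows what that call computes under the hood: a direct-form convolution of the coefficients with the signal. The 3-tap moving-average filter is a hypothetical stand-in for the fir1(20, 0.2) coefficients; the signal values are the ones from the question.

    /* fir_apply.c - direct-form FIR filtering: y[n] = sum_k b[k] * x[n-k] (sketch) */
    #include <stdio.h>

    /* x[n] is treated as 0 for n < 0, matching MATLAB filter()'s zero initial conditions */
    static void fir_apply(const double *b, int nb, const double *x, int nx, double *y) {
        for (int n = 0; n < nx; n++) {
            double acc = 0.0;
            for (int k = 0; k < nb && k <= n; k++)
                acc += b[k] * x[n - k];
            y[n] = acc;
        }
    }

    int main(void) {
        /* hypothetical 3-tap moving average standing in for the fir1(20, 0.2) coefficients */
        const double b[] = { 1.0 / 3, 1.0 / 3, 1.0 / 3 };
        /* the example signal from the question */
        const double x[] = { 0.1, -0.2, 0.3, -0.4 };
        double y[4];

        fir_apply(b, 3, x, 4, y);
        for (int n = 0; n < 4; n++)
            printf("y[%d] = %f\n", n, y[n]);
        return 0;
    }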

    Read the article

  • Converting WAV to MP3 on Linux with low bitrates

    - by Olly
    I need to convert WAV files to MP3 files so they can be played on a website. I think that LAME would probably be the best tool. However, the WAV files are low bitrate (around 8 kbit/s, recorded from a phone) and LAME's website states that it is the "best MP3 encoder at mid-high bitrates and at VBR". Is there a better encoder for lower bitrates? If so, can you define "better"?

    Read the article

  • procedure that swaps the bytes (low/high) of a Word variable

    - by Altar
    Hi. I have this procedure that swaps the bytes (low/high) of a Word variable (it does the same thing as the System.Swap function). The procedure works when compiler optimization is OFF but not when it is ON. Can anybody help me with this?

    { UNSAFE! IT IS NOT WORKING WHEN COMPILER OPTIMIZATION IS ON! }
    procedure SwapWord_NotWorking(VAR TwoBytes: word);
    asm
      Mov EBX, TwoBytes
      Mov AX, [EBX]
      XCHG AL, AH
      Mov [EBX], AX
    end;

    Read the article

  • Persistence scheme & state data for low-memory situations (iPhone)

    - by Robin Jamieson
    What happens to state information held by a class's variables after coming back from a low-memory situation? I know that views will get unloaded and then reloaded later, but what about ancillary classes and the data held in them that's used by the controller that launched the view? Sample scenario in question:

    @interface MyCustomController : UIViewController {
        ServiceAuthenticator *authenticator;
    }
    -(id)initWithAuthenticator:(ServiceAuthenticator *)auth;
    // the user may press a button that will cause the authenticator
    // to post some data to the service.
    -(IBAction)doStuffButtonPressed:(id)sender;
    @end

    @interface ServiceAuthenticator {
        BOOL hasValidCredentials; // YES if user's credentials have been validated
        NSString *username;
        NSString *password;       // password is not stored in plain text
    }
    -(id)initWithUserCredentials:(NSString *)username password:(NSString *)aPassword;
    -(void)postData:(NSString *)data;
    @end

    The app delegate creates the ServiceAuthenticator with some user data (read from a plist file), and the class logs the user in with the remote service. Inside MyAppDelegate's applicationDidFinishLaunching:

    - (void)applicationDidFinishLaunching:(UIApplication *)application {
        ServiceAuthenticator *auth = [[ServiceAuthenticator alloc] initWithUserCredentials:username password:userPassword];
        MyCustomController *controller = [[MyCustomController alloc] initWithNibName:...];
        controller.authenticator = auth;
        // Configure and show the window
        [window addSubview:..];
        // make everything visible
        [window makeKeyAndVisible];
    }

    Then whenever the user presses a certain button, MyCustomController's doStuffButtonPressed: is invoked.

    -(IBAction)doStuffButtonPressed:(id)sender {
        [authenticator postData:someDataFromSender];
    }

    The authenticator in turn checks whether the user is logged in (a BOOL variable indicates the login state) and, if so, exchanges data with the remote service. The ServiceAuthenticator is the kind of class that validates the user's credentials only once; all subsequent calls to the object will be to postData. Once a low-memory situation occurs and the associated nib and MyCustomController get unloaded - when they're reloaded, what's the process for re-establishing the ServiceAuthenticator and its former state? I'm periodically persisting all of the data in my actual model classes. Should I consider also persisting the state data in these utility-style classes? Is that the pattern to follow?

    Read the article

  • Tuning garbage collections for low latency

    - by elec
    I'm looking for arguments as to how best to size the young generation (with respect to the old generation) in an environment where low latency is critical. My own testing tends to show that latency is lowest when the young generation is fairly large (e.g. -XX:NewRatio < 3); however, I cannot reconcile this with the intuition that the larger the young generation, the more time it should take to garbage collect. The application runs on Linux, JDK 6 before update 14, i.e. G1 is not available.

    Read the article

  • Low delay audio on Android via NDK

    - by hkhauke
    Hi, it seems that this question has been asked before; I would just like to know whether there is an update in Android. I plan to write an audio application involving low-delay audio I/O (approx. < 10 ms). It does not seem to be possible with the methods the SDK offers, so is there - in the meantime - a way to achieve this goal using the NDK? Best regards, HK

    Read the article
