Search Results

Search found 3912 results on 157 pages for 'distributed caching'.

Page 121/157

  • Using multiple computers effectively

    - by Benjamin Oakes
    I have some extra (old) Macs and PCs around the house and a MacBook that's sometimes overworked. I'm looking for tips on using multiple computers effectively; basically, I'd like to add to the following list. Here's what I'm using so far:
      - Teleport: lets you use a single mouse and keyboard to control several Macs, like Synergy
      - Built-in file sharing: lets me run programs on another Mac while maintaining only one copy of the data
      - Bazaar: distributed version control
      - Mail.app, Thunderbird, etc.: IMAP for my mail accounts
      - TuneConnect: control iTunes on another Mac with a nice interface, using the library on my MacBook (if I choose it by pressing Option at startup) over file sharing
      - OmniFocus: syncs across computers pretty seamlessly
      - Web browsing across computers
      - VNC/Remote Desktop
      - Running X Window programs using ssh -Y hostname for headless operation (but they die when I sleep the connecting computer -- something like GNU screen would be ideal; see the sketch below)
      - Plain old ssh with GNU screen
    Really, a better idea of what I do might be necessary. Generally, though, I'd like to distribute tasks across more than one computer when possible, but without much overhead in doing so. The perfect solution? An Xgrid-like program that pushes processing across multiple computers automatically and seamlessly (although that seems unlikely). Here's what I have, in case it makes a difference:
      - MacBook (Dual 2.16 GHz, OS X 10.6.3)
      - eMac (1.25 GHz, OS X 10.4.11, soon to be 10.5)
      - Dell Dimension (800 MHz, some version of Ubuntu) -- no dedicated monitor
      - PowerMac G3 (400 MHz, OS X 10.4.11) -- no dedicated monitor
      - iMac G3 DV (400 MHz, OS X 10.4.11) -- currently in the kitchen for recipes, email, web browsing, music, movies (DVDs), etc.
    (In total they cost me around $650, mostly for the MacBook. Freecycle is wonderful, just in case you haven't heard of it.) I'm really only using the MacBook and eMac at this point, but I'd like to push more onto them, and possibly onto the PowerMac and Dell.
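    A minimal sketch of the GNU screen workaround mentioned in the list above, assuming screen is installed on the remote machines (hostnames and the session name are placeholders):
      # Reattach to (or create) a persistent shell session on the eMac; it survives
      # the MacBook sleeping, unlike a bare ssh -Y session.
      ssh -t user@emac.local screen -dRR worksession
      # Long-running graphical jobs are better started under a display that lives on
      # the remote box itself (for example a VNC server on the Ubuntu machine, if one
      # is installed) and viewed on demand, rather than over X11 forwarding that dies
      # with the connection.
      ssh user@dell.local "vncserver :1 -geometry 1024x768"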

    Read the article

  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi all, I have a project in mind and I'd love to hear some ideas on open source solutions using COTS hardware. I have a few 24- and/or 48-port managed layer-2 switches with customers potentially on each port (though it's usually about 20-30). Right now the switch has a bridged network and we backhaul the traffic to our core to a centralized DHCP server. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know).
    My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range being handed out via a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD but I'm open.
    Now, to try to limit the broadcast traffic that's visible, I was thinking of making each port on the switch a different VLAN and having the switch trunk to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should work (a rough sketch follows below). We have the parts to build these systems.
    So, does this make sense? Are there any other solutions out there that we don't have to spend money on but can build from our parts? Are there any good distros that could do this already (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds. Thoughts?
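    A rough sketch of the VLAN-per-port plus NAT idea on FreeBSD, purely as an assumption of how it could look (interface names, VLAN IDs and the address plan are placeholders, and pf is only one of several firewall options):
      # One VLAN interface per customer port, all trunked in on the private NIC (em1)
      ifconfig vlan101 create vlan 101 vlandev em1
      ifconfig vlan101 inet 10.0.101.1/24 up
      ifconfig vlan102 create vlan 102 vlandev em1
      ifconfig vlan102 inet 10.0.102.1/24 up
      # NAT everything out of the public NIC (em0) and forward one public port inside
      echo 'nat on em0 from 10.0.0.0/16 to any -> (em0)' >> /etc/pf.conf
      echo 'rdr on em0 proto tcp from any to any port 8080 -> 10.0.101.50 port 80' >> /etc/pf.conf
      pfctl -f /etc/pf.conf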

    Read the article

  • Remote Desktop Session Black after Minimize

    - by TorgoGuy
    PROBLEM: When I minimize a remote desktop session and restore it, the remote desktop screen shows up black. This only happens when connecting to a particular computer.
    DETAILS: If I start clicking around in the black area, portions of the screen start redrawing and showing up correctly. For example, if I leave a window open in the remote session and click where that window is located on the remote computer, then that window -- and only that window -- will redraw, and sometimes a portion of it won't redraw (usually the toolbar). To clarify, the window only has to be minimized momentarily, so it doesn't seem to be a timeout issue. Clicking or typing in the remote session still causes the remote computer to respond appropriately. Disconnecting from the session and reconnecting restores the whole screen image, as does clicking all over the place in the black image (causing each section to redraw).
    CONFIGURATION: This problem only happens for me when connecting to a particular computer (a Windows 2000 Server box configured to allow remote administration), and only with certain client computers. I've tried 7 different client computers with various versions of Remote Desktop (the OSes were Win2K, Server 2003, Server 2008, Windows 7 RC, and 3 x XP), and two of them exhibit the problem (one of the XP boxes and the Windows 7 machine). Those same computers can RDP to other computers without problems.
    RESOLUTION ATTEMPTS: I have tried the following:
      - Disabling the LOCAL screen saver, as mentioned on TechNet
      - Turning off bitmap caching in the client, as mentioned on many forums
      - Updating to version 6.1 of the Remote Desktop client
      - Using mRemote (I doubted this would work, since it uses MS's code for connecting to RDP servers)
      - Turning off all video acceleration
    QUESTION: Any ideas on what is causing this?

    Read the article

  • What is the optimal hardware configuration for a heavy-load LAMP application?

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users -- I am aiming for 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time, and "work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques don't help too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.
    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DL380 boxes with two quad-core processors each, which handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster from them, or is it better to go with more high-end hardware? I am particularly curious about:
      - how many servers are needed and how powerful they should be (number of processors/cores, amount of RAM)
      - what network equipment should be used (what kind of switches, network cards)
      - any other hardware, such as particular disk storage solutions, that is needed
    Another question is how to put everything together, that is, what the optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).
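    Before sizing new hardware, it may help to measure what one worker actually costs on the existing boxes. A minimal sketch, assuming a Debian/Ubuntu-style "apache2" process name (substitute "httpd" where appropriate):
      # Mean resident memory per Apache worker, in MB
      ps -ylC apache2 | awk 'NR>1 {sum+=$8; n++} END {printf "mean RSS: %.1f MB over %d workers\n", sum/n/1024, n}'
      # Back-of-the-envelope RAM budget: e.g. 500 workers x 40 MB is roughly 20 GB
      # for Apache alone, before the InnoDB buffer pool and OS cache are counted.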

    Read the article

  • Is 30 calls / second a lot for one IIS server?

    - by Lieven Cardoen
    We have a RIA application that 300 clients use concurrently in an intranet environment. Together they make 30 calls/second to IIS (ASP.NET) -- actually it's 60, but the calls are load-balanced over two IIS servers. Half of the calls fetch an asset (a caching profile is used, so most of the time the cache is hit); the other half save data to SQL Server. Retrieving an asset is done with an .aspx page. Saving the data happens via WebORB, ASP.NET and SQL Server, so some processing is needed by WebORB (AMF decoding, GZIP, ...). We also use Spring.NET, and some of the container objects have a request scope (not a lot).
    The IIS servers are virtual machines with 4 CPUs and 2 GB RAM each, running Windows 2008 x64 SP2 Enterprise Edition. SQL Server 2008 is used. Apparently the CPU of both IIS servers is constantly around 60-70%.
    Now, my questions: is a load of 60-70% acceptable, and how could we possibly bring that percentage down (maybe to the point of using only one IIS server)? Also, is 2 GB RAM enough? Assets can be up to 20 MB, but on average they are about 30 KB (the 60-70% load is reached with assets around 30 KB). The data that gets saved via WebORB is very small (2 KB) and is just one object.

    Read the article

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS with a widely geographically distributed user base, and we have been receiving complaints about issues with some HTTP uploads. Most of the complaints come from locations geographically distant from our main datacenter on the US east coast.
    I've attempted to upload the same 70 MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native CF upload tag and a third-party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files (< 10 MB) are more likely to succeed than large ones.
    I tried the test upload with both IE and Firefox and noticed a difference in the way the browsers seemed to handle packet errors: IE seemed to have a tough time continuing an upload after dropped/bad packets, whereas Firefox seemed able to gracefully resume an upload after experiencing packet problems.
    Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error -- a different upload tool, etc.? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer-7 packet inspection might lead to a smoother upload)? Thanks.
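    For what it's worth, one way to quantify the geographic effect is to time the same upload from test machines in each region with curl -- a sketch in which the file name and URL are placeholders:
      for i in 1 2 3; do
        curl -s -o /dev/null -F "file=@test70mb.bin" \
             -w "attempt $i: total %{time_total}s, upload rate %{speed_upload} B/s\n" \
             https://marketplace.example.com/upload.cfm
      done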

    Read the article

  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit) plus a few hundred with Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:
      - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
      - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
      - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
      - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application
    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. -- we already have the Windows firewall disabled).
    The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)
      a) If we had these users distributed across two sites with an AD DC/GC in each site, how would I restrict the SEPM and desktop management solution to only check for users in their respective site?
      b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
      c) What hardware would you recommend as a server spec for the SQL servers -- 16 GB RAM, dual Xeon?
      d) What hardware would you recommend as a server spec for the management servers -- 16 GB RAM each, with dual Xeon and SAS disks?
      e) Also, how would you recommend protecting these four servers (2 x SQL and 2 x management servers)?
      f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and we have one spare DAS (Dell MD3000).
    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections. Many thanks! Rihatum

    Read the article

  • RAID-5 per-spindle performance scaling

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles and requires heavy random writes (the corresponding read side is purely sequential). It needs every bit of space on my drives, ~13 TB total in an n-1 RAID-5, and it has to go fast -- over 2 GB/s sort of fast. The obvious answer is to use a stripe/concat (RAID-0/1), or better yet a RAID-10 in place of the RAID-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a sub-optimal configuration to be as good as it can be.
    The array is built on direct-attached SAS-2 10K RPM drives, backed by an Areca 18xx-series controller with 4 GB of cache: 64 KB array stripes and a 4 KB stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty of being RAID-5).
    The heart of my question is this: in the same setup with 6 spindles/AGs I see near disk-limited performance on writes, ~100 MB/s per spindle; at 12 spindles that drops to ~80 MB/s, and at 24 to ~60 MB/s. I would expect that with distributed parity and matched AGs the performance should scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite. What am I missing? Should RAID-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill
    Edit: the other relevant thread I was able to find, "Improving RAID performance", discusses some of the same issues in its answers, though it still leaves me without an answer on the performance scaling.
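    Not an answer to the scaling question, but for reference, a sketch of how the stripe geometry can be handed to XFS explicitly (the device name is a placeholder, and sw=23 assumes 24 drives minus one drive's worth of parity):
      # Align the file system to the 64 KB RAID stripe and pre-create 24 allocation groups
      mkfs.xfs -f -d su=64k,sw=23,agcount=24 /dev/sdb1
      # With a battery-backed controller cache, write barriers are often disabled too
      mount -o noatime,nobarrier /dev/sdb1 /data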

    Read the article

  • Using AddEncoding x-gzip .gz without actual files

    - by STATUS_ACCESS_DENIED
    With Apache (2.2 and later), how can I achieve the following? I want to transparently compress the output using GZip encoding (not plain deflate) when a certain file is requested by its name plus the extension .gz, where the .gz version doesn't physically exist on disk. So let's say I have a file named /path/foo.bar and no file foo.bar.gz in the folder to which the URI /path maps -- how can I get Apache to serve the contents of /path/foo.bar but with AddEncoding x-gzip ... applied to the (non-existent) file?
    The rewrite part appears to be easy, but the problem is how to apply the encoding to a non-existent item. The other way around also seems simple, as long as the client supports the encoding. Is the only solution really a script that does this on the fly? I'm aware of mod_deflate and mod_gzip, and they are not what I'm looking for -- at least not alone. In particular I need an actual GZIP file and not just a deflated stream.
    Now I was thinking of using mod_ext_filter, but I couldn't bridge the gap between rewriting the name of the (non-existent) file.gz to file on one side and the LocationMatch on the other. Here's what I have:
      RewriteRule ^(.*?\.ext)\.gz$ $1 [L]
      ExtFilterDefine gzip mode=output cmd="/bin/gzip"
      <LocationMatch "/my-files/special-path/.*?\.ext\.gz">
          AddType application/octet-stream .ext.gz
          SetOutputFilter gzip
          Header set Content-Encoding gzip
      </LocationMatch>
    Note that the Content-Encoding header isn't really needed by the clients in this case. They expect to see actual GZIP files, but I want to do this on the fly without caching (this is a test scenario).

    Read the article

  • Why doesn't apache2 consistently load template fragments from memcached?

    - by Hobhouse
    I run a webserver on an Ubuntu box in the Rackspace Cloud with Django 1.0.x, Apache 2/WSGI and memcached 1.2.2. Some of my templates make use of template fragment caching:
      {% load cache %}
      {% cache 604800 keyname %}
      <!-- cache: {% now "H:i, j. b" %} -->
      {{ my_content }}
      {% endcache %}
    When I reload Apache everything is fine. If keyname is not set, my_content is generated and keyname is set in memcached; after that, my_content is served from memcached. My problem is that after some hours (notably less than 604800 seconds), Apache seems to stop talking to memcached, and my_content is generated from scratch every time. When this happens I can still set and get keys in memcached from my Python shell, and memcached has more than enough memory to store keys. But to get Apache talking to memcached again I have to restart Apache, and then it once again starts getting the now several-hours-old keys from memcached. What can be the reason for this behaviour, and how do I fix it?
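    When it next happens, it may be worth checking memcached's own view of things before restarting Apache -- a quick sketch, assuming a netcat that supports -q and the default memcached port 11211:
      # Connection, eviction and miss counters straight from memcached
      echo stats | nc -q1 localhost 11211 | egrep 'curr_connections|evictions|limit_maxbytes|get_misses'
    If curr_connections is pinned at memcached's connection limit, the Apache/WSGI processes are more likely exhausting connections than the keys expiring.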

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use / would you recommend for monitoring a Linux-based, DB-backed website's servers for bottlenecks and load? The obvious goal is to know when growth has reached the point where it's necessary to scale up (or out) one or more of the bits and pieces, because the current system won't manage the load if an observed trend continues.
    I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: it'll be Tomcat 6 using APR (possibly with Varnish or a similar caching and balancing front end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc filesystem (et al.), and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such, but I have a suspicion that I'd be reinventing a wheel. For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others?
    Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.
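    In the "reinventing a wheel" spirit, the do-it-yourself baseline described above can be as small as a cron entry while packaged tools are evaluated -- a sketch where the log path and interval are arbitrary (iostat comes from the sysstat package):
      # Append coarse CPU, memory and disk numbers every five minutes
      ( crontab -l 2>/dev/null; \
        echo '*/5 * * * * (date; uptime; vmstat 1 5; iostat -dxk 1 5) >> /var/log/perf-baseline.log 2>&1' ) | crontab -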

    Read the article

  • Temporary boot problem after thunderstorm - likely causes?

    - by alastairs
    The village where I live sat under a thunder cloud for most of Friday, and we suffered a few power fluctuations (specifically, what seemed to be split-second outages). When I got back home from work, I found that my PCs had shut down during one of these outages. When I went to boot one of them back up, I couldn't get anything to display on screen, nor did the boot seem to complete correctly. I tried a number of things -- unplugging different bits of hardware, swapping graphics adapters, etc. -- to no avail. I thought I was looking at a fried motherboard or CPU. Power seemed to be distributed correctly to the peripherals (the drives all appeared to be working), so I figured it couldn't be the PSU. Eventually I unplugged it from the mains and left it overnight (approximately 12 hours unplugged). I tried it again this morning, and it booted up correctly. Woo-hoo!
    I have all my equipment protected by surge-protected power strips, so I don't think a spike caused these problems. Obviously it has something to do with the power fluctuations, and maybe the PSU in the problem machine got itself confused somehow. The questions, for future reference and to help people with similar problems:
      - What are the likely causes of the boot failure I experienced?
      - Is a UPS a simple and cost-effective solution, or might other things help prevent this happening in future?
      - What UPS can you recommend (my budget is limited)?

    Read the article

  • In Windows 7's Task Manager, why is Memory 1118 MB Available but only 62 MB Free? [closed]

    - by Jian Lin
    Possible duplicate: Windows 7 memory usage
    What are the "Cached", "Available", and "Free" memory figures in the following picture (from Windows 7's Task Manager)? If 1118 MB is Available, then why isn't it Free (to use)? As I understand it, if a bowl of noodles is available, that doesn't mean it is free... it may still cost $7. But what about in the Task Manager -- when memory is Available but not Free, does it cost $2 per MB?
    And what about the "Cached" figure -- what exactly is cached memory? We may put some hard disk data in RAM, caching the data in RAM for faster access (that's the operating system's job). So if the total physical RAM is 6 GB, what is the 1106 MB of Cached? Cached where? It is also strange that the Cached value is sometimes higher and sometimes lower than the Available value. Can somebody who is knowledgeable about this shed some light on these meanings?

    Read the article

  • Utility to store/cache all web pages and YouTube videos

    - by jonathanconway
    I found myself in the following situation: I'm travelling abroad with my laptop. I connect to a WiFi point, do a bit of browsing and play a YouTube video or two. Then I disconnect and hop on either a plane or a taxi. Now I want to go back to some of the webpages I was browsing before and continue reading them, or watch some more of that YouTube video. Unfortunately it seems that none of these resources are cached -- or if they are, I have no idea how to access them.
    Here's what I'd like: a utility that starts when my computer boots and sits in the background, silently caching all the web pages that I view -- and not only those, but also resources such as YouTube videos. Later, when I re-navigate to a site while disconnected, the browser automatically pulls the pages from my cache rather than giving me a 404 error. Or I can click an icon in the system tray and see a list of all the pages/videos in the cache and view any that I like.
    I'm sure Internet Explorer had a feature like this at some point, like "Offline Mode" or something, but these days it doesn't seem to work: even when I select that option I still can't view pages that I'm certain I downloaded before. So has the utility I'm talking about been developed yet?
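    Not the set-and-forget utility being asked for, but as a stopgap, pages you know you'll want can be mirrored before going offline -- a small sketch in which the URL and target directory are placeholders (YouTube videos need a separate downloader):
      wget --mirror --convert-links --page-requisites --no-parent \
           --directory-prefix="$HOME/offline-cache" http://example.com/some/article.html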

    Read the article

  • Web Server Routing Based On Location

    - by Eric
    I have a website with users from both Hong Kong and Australia. Unfortunately, since the server is located in Australia, users from Hong Kong suffer latency problems: traffic has to go through the US before travelling back to Australia. So I've set up a server in Hong Kong as well, and users using the .hk TLD are redirected to the Hong Kong web server. It shares the same database server as the Australian server, but thanks to aggressive SQL query caching the performance impact of the SQL query latency is negligible.
    But users accustomed to the Hong Kong website who have since travelled to Australia suffer additional latency, because they go to the .hk site, which sends them to the HK server even when they're in Australia. The website is targeted at international students from Hong Kong, so this is a significant issue for me.
    Instead of redirecting users to the closest web server based on the TLD, how do I redirect users based on their location? Currently I am using nginx, Postgres and Django. Say I know how to estimate a user's location from their IP address -- what is my next step? At what level would I work? What topics should I read up on?
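    One possible next step, purely as a sketch: nginx can do the IP-to-country lookup itself if it was built with the GeoIP module and a MaxMind country database is installed. Domain names, paths and the country code below are placeholders, and the directives belong inside the existing http block:
      geoip_country /usr/share/GeoIP/GeoIP.dat;
      server {
          listen 80;
          server_name www.example.hk;
          # Visitors whose IP geolocates to Australia get bounced to the AU server
          if ($geoip_country_code = AU) {
              rewrite ^ http://www.example.com.au$request_uri? redirect;
          }
          # ...normal Hong Kong server configuration continues here
      }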

    Read the article

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. I have a server that will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems, because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with it. Clearing the ARP cache resolves the issue, and the server is fine for about a week, but then it slowly starts turning ARP entries into static entries again. I haven't narrowed down when it starts or how many entries are affected, but slowly you see 1 static ARP entry, then 5, then 10.
    The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only difference between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server.
    I've done a bit of research, and it seems this may happen if any PXE-type services are running, but from what I can tell nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my workaround is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *), but I would like not to rely on that scheduled task. Has anybody seen this before, or have any suggestions on how to troubleshoot it?

    Read the article

  • Hardware and network infrastructure for running a gaming server based on VirtualGL

    - by archer
    I found the nice VirtualGL project (http://www.virtualgl.org/) and tried running 3D games (EVE Online, Prototype) on a server, displaying the output on a thin client over a 100 Mbps network. The server is Gentoo Linux on an AMD Phenom II X6 3.4 GHz with 8 GB RAM and 2 x NVIDIA 9800 GTX, driving a single session at a display resolution of 1024x768 on the client. Performance is very promising. I'm going to increase the network speed to 1 Gbps (using either Ethernet or fiber) and run 5-6 clients simultaneously. My questions are:
      a) What would be better for the network -- 1 Gbps Ethernet or fiber (clients are at most 20 m from the server)? Is a managed switch a must for better network performance?
      b) Should I increase the number of video cards in the server (I'm going to use a Gigabyte GA-890FXA-UD7, which has 6 PCI Express slots [2 x4, 2 x8 and 2 x16])? Would it impact performance significantly? If I do need more video cards, what would be better: two banks of three cards each in SLI, or three banks of two cards each? Would Linux recognize that and properly use all banks of video cards?
      c) Any suggestions on good thin clients supporting 1920x1080 HDMI video and a 1 Gbps network?
    I understand that my questions can't be answered definitively (unless someone has already managed to use this kind of setup ;)), but any suggestions would be very helpful.

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience, where latency should be minimized. So we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - clients.
    We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU could hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs, but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to direct traffic based on routing... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

    Read the article

  • Use Aladdin eToken with Thunderbird and other tools

    - by Yurij73
    I'm looking for an example of how to set up the eToken PRO (Java) device to work with Mozilla Thunderbird and with other Linux tools such as PAM logon. I installed the distributed pkiclient-5.00.28-0.i386.rpm from the official eToken PRO product page, but that tool only handles importing/exporting certificates on the device. I glanced at an old "eToken on Linux" HOWTO, but I couldn't install the PKCS#11 library for this device as recommended for Thunderbird to use this crypto device. It seems my USB token isn't listed in the system, although lsusb shows it. Here is what modutil reports:
      modutil -list -dbdir /etc/pki/nssdb
      Listing of PKCS #11 Modules
        NSS Internal PKCS #11 Module
          slots: 2 slots attached
          status: loaded
          slot: NSS User Private Key and Certificate Services
          token: NSS Certificate DB
        CoolKey PKCS #11 Module
          library name: libcoolkeypk11.so
          slots: 1 slot attached
          status: loaded
          slot: AKS ifdh [Main Interface] 00 00
          token:
    Is my token absent? On the other hand, I don't know which module is appropriate for the eToken PRO (Java) -- does CoolKey do the whole job? Is the Java token simply too new for Linux? Here is an excerpt from /etc/pam_pkcs11.conf:
      # filename of the PKCS #11 module. The default value is "default"
      use_pkcs11_module = coolkey;
      screen_savers = gnome-screensaver,xscreensaver,kscreensaver
      pkcs11_module coolkey {
        module = libcoolkeypk11.so;
        description = "Cool Key"
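    For what it's worth, a hedged sketch of how a vendor PKCS#11 library is normally registered with Thunderbird's NSS database using the same modutil tool -- the library name libeTPkcs11.so and the profile path are assumptions that depend on which SafeNet/Aladdin package is actually installed:
      # Register the eToken PKCS#11 module in the Thunderbird profile's NSS DB
      modutil -dbdir ~/.thunderbird/xxxxxxxx.default -add "eToken PKCS#11" -libfile /usr/lib/libeTPkcs11.so
      modutil -dbdir ~/.thunderbird/xxxxxxxx.default -list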

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards, each running on three different machines, constituting a total of 24 shard instances. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is again 8 shards, just like plain sharding, and all 8 mongods run in the same partition on each machine; but apart from this, each of the three machines now runs an additional 16 mongod instances on another partition, which serve as the secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3.
    An important point to note is that once the data has been loaded, it is not modified, so after the primaries and secondaries have synchronized it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config), which runs mongos; this machine's only purpose is to receive queries and run them on the cluster.
    Contrary to my expectations, plain sharding with 8 instances on each machine (total = 3 * 8 = 24) performs better on queries than the sharded + replicated configuration. I have a script written to perform the query, so to time it I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into the mongo shell on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript. The questions are:
      1. Where am I going wrong in the replication?
      2. Why is it slower than the plain sharding version?
      3. Does the db.getMongo().setReadPref('secondary') setting persist when I leave the shell and then perform the query?
    All four machines are running Linux, and I have already increased ulimit -n to 2048 from the initial value of 1024 to allow more connections. The collections are properly distributed and all the mongods have an equal number of chunks. It goes without saying that the indexes in both configurations are the same.
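    On the last question: a read preference set in an interactive mongo shell applies only to that shell session, so it is unlikely to affect a separately launched test script. A hedged sketch of setting it inside the timed invocation itself (host, database and collection names are placeholders):
      time mongo --quiet config.example.com:27017/mydb --eval '
        db.getMongo().setReadPref("secondary");
        print(db.mycoll.find({ someField: "someValue" }).count());
      '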

    Read the article

  • How to deal with the extremely big *.ost files in a Terminal Server environment which is running out of space

    - by Wolfgang Kuehne
    Our Terminal Server is running out of hard disk space, and the files occupying most of the space are the Outlook *.ost files of the users who use the Terminal Server all the time through Remote Desktop. Outlook is installed on the Terminal Server and various users can use it. What would be a solution in this case? Is there a way to limit the size of the *.ost files?
    I read in forums that having Outlook 2010 set up in Cached Exchange Mode isn't best practice for an environment where disk space is a major constraint. The first thing that came to my mind was folder redirection, placing the OST files (together with the AppData folder) on a network share, but this does not help, because the OST files live in a part of the AppData folder that cannot be redirected. Then I wondered whether it is possible to limit the size of the OST file, or to limit how long it keeps email cached -- say, just email from the last 6 months would be sufficient.
    Another solution that came to mind is moving the OST files somewhere else; this requires the old OST file to be removed and a new one created. I am not quite sure whether the new OST file would still have the emails cached that were available in the old one, or whether it would start caching from where the other one left off. What do you suggest?

    Read the article

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image for VM use. The common knowledge is that partition-based images are faster than file-based images because of decreased overhead. That makes sense, but I've never seen any actual numbers.
    My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead, on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results.
    So, perhaps throwing out the performance justification: is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or is there some reason to go the other way around -- perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
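    For comparison purposes, one way to take the host page cache out of the picture when benchmarking the two backends is to force direct, synchronous I/O -- a rough sketch in which the device and file names are placeholders (and the first command overwrites the partition):
      # Raw partition backend
      dd if=/dev/zero of=/dev/sdb2 bs=1M count=4096 oflag=direct conv=fdatasync
      # File-backed image on ext4
      dd if=/dev/zero of=/mnt/images/disk.img bs=1M count=4096 oflag=direct conv=fdatasync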

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyLoad through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:
      wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login
    But the content of my_cookies.txt is only:
      # HTTP cookie file.
      # Generated by Wget on 2012-06-23 22:31:33.
      # Edit at your own risk.
    When I run the same command in debug mode I get the following output, which includes the Set-Cookie in the response header:
      DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
      --22:31:11--  http://localhost:8000/api/login
      Resolving localhost... 127.0.0.1
      Caching localhost => 127.0.0.1
      Connecting to localhost|127.0.0.1|:8000... connected.
      Created socket 3.
      Releasing 0x000504d0 (new refcount 1).
      ---request begin---
      POST /api/login HTTP/1.0
      User-Agent: Wget/1.10.2 (Red Hat modified)
      Accept: */*
      Host: localhost:8000
      Connection: Keep-Alive
      Content-Type: application/x-www-form-urlencoded
      Content-Length: 32
      ---request end---
      [POST data: username=USERNAME&password=PASSWORD]
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 200 OK
      Content-Length: 34
      Content-Type: application/json
      Cache-Control: no-cache, must-revalidate
      Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
      Connection: Keep-Alive
      Date: Sat, 23 Jun 2012 21:31:11 GMT
      Server: CherryPy/3.1.2 WSGI Server
      ---response end---
      200 OK
      hs->local_file is: login (not existing)
      Registered socket 3 for persistent reuse.
      TEXTHTML is on.
      Length: 34 [application/json]
      Saving to: `login'
      100%[=======================================>] 34  --.-K/s   in 0s
      22:31:11 (1.28 MB/s) - `login' saved [34/34]
      Removing file due to --delete-after in main(): Removing login.
      Saving cookies to my_cookies.txt.
      Done saving cookies.
    Can anyone tell me what I am doing wrong? Thanks in advance!
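    As a cross-check (not part of the original post), the same login can be attempted with curl, which writes its own cookie jar -- if curl records the beaker.session.id cookie from the same endpoint, the problem is specific to that wget build rather than to pyLoad:
      curl -c curl_cookies.txt -d "username=USERNAME&password=PASSWORD" http://localhost:8000/api/login
      cat curl_cookies.txt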

    Read the article

  • Setting kernel shared memory for installing PostgreSQL

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing PostgreSQL on Mac OS X 10.6.8. The PostgreSQL README says:
      Shared Memory
      PostgreSQL uses shared memory extensively for caching and inter-process communication. Unfortunately, the default configuration of Mac OS X does not allow suitable amounts of shared memory to be created to run the database server. Before running the installation, please ensure that your system is configured to allow the use of larger amounts of shared memory. Note that this does not 'reserve' any memory, so it is safe to configure much higher values than you might initially need. You can do this by editing the file /etc/sysctl.conf - e.g.
        % sudo vi /etc/sysctl.conf
      On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains:
        kern.sysv.shmmax=1610612736
        kern.sysv.shmall=393216
        kern.sysv.shmmin=1
        kern.sysv.shmmni=32
        kern.sysv.shmseg=8
        kern.maxprocperuid=512
        kern.maxproc=2048
      Note that (kern.sysv.shmall * 4096) should be greater than or equal to kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096. Once you have edited (or created) the file, reboot before continuing with the installation. If you wish to check the settings currently being used by the kernel, you can use the sysctl utility:
        % sysctl -a
      The database server can now be installed.
    I'm a real beginner with all this but need to install PostgreSQL for academic purposes. Do you know how I can set this kernel shared memory? Won't that be harmful to my system? Thank you in advance. Matthieu
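    As a quick sanity check on the README's example values: 393216 * 4096 = 1,610,612,736, so shmall * 4096 equals shmmax exactly, and 1610612736 is itself a multiple of 4096. A minimal sketch of applying and verifying them (creating /etc/sysctl.conf requires admin rights, and rebooting is the reliable way to make the values take effect):
      # Append the README's settings to /etc/sysctl.conf, then reboot
      sudo sh -c 'printf "kern.sysv.shmmax=1610612736\nkern.sysv.shmall=393216\n" >> /etc/sysctl.conf'
      # After the reboot, confirm what the kernel is actually using
      sysctl kern.sysv.shmmax kern.sysv.shmall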

    Read the article

  • Performance: Nginx SSL slowness or just SSL slowness in general?

    - by Mauvis Ledford
    I have an Amazon Web Services setup with an Apache instance behind nginx, with nginx handling SSL and serving everything but the .php pages. In my ApacheBench tests I'm seeing this for my most expensive API call (which caches via Memcached):
      100 concurrent calls to the API call (http):  115 ms (median), 260 ms (max)
      100 concurrent calls to the API call (https): 6.1 s (median), 11.9 s (max)
    I've done a bit of research, disabled the most expensive SSL ciphers and enabled SSL session caching (I know it doesn't help in this particular test). Can you tell me why my SSL is taking so long? I've set up a massive EC2 server with 8 CPUs, and even applying consistent load to it only brings it up to 50% total CPU. I have 8 nginx workers set and a bunch of Apache workers. Currently this whole setup is on one EC2 box, but I plan to split it up and load balance it. There have been a few questions on this topic, but none of those answers (disable expensive ciphers, cache SSL sessions) seem to do anything. Sample results below:
      $ ab -k -n 100 -c 100 https://URL
      This is ApacheBench, Version 2.3 <$Revision: 655654 $>
      Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
      Licensed to The Apache Software Foundation, http://www.apache.org/
      Benchmarking URL.com (be patient).....done
      Server Software:        nginx/1.0.15
      Server Hostname:        URL.com
      Server Port:            443
      SSL/TLS Protocol:       TLSv1/SSLv3,AES256-SHA,2048,256
      Document Path:          /PATH
      Document Length:        73142 bytes
      Concurrency Level:      100
      Time taken for tests:   12.204 seconds
      Complete requests:      100
      Failed requests:        0
      Write errors:           0
      Keep-Alive requests:    0
      Total transferred:      7351097 bytes
      HTML transferred:       7314200 bytes
      Requests per second:    8.19 [#/sec] (mean)
      Time per request:       12203.589 [ms] (mean)
      Time per request:       122.036 [ms] (mean, across all concurrent requests)
      Transfer rate:          588.25 [Kbytes/sec] received
      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:       65  168   64.1    162     268
      Processing:   385 6096 3438.6   6199   11928
      Waiting:      379 6091 3438.5   6194   11923
      Total:        449 6264 3476.4   6323   12196
      Percentage of the requests served within a certain time (ms)
        50%   6323
        66%   8244
        75%   9321
        80%   9919
        90%  11119
        95%  11720
        98%  12076
        99%  12196
       100%  12196 (longest request)
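    Not from the post, but for reference, a sketch of the nginx http-level SSL settings that are usually worth double-checking in this situation (the post says session caching is already on; the sizes below are placeholders):
      ssl_session_cache    shared:SSL:10m;   # share resumed sessions across the 8 workers
      ssl_session_timeout  10m;
      keepalive_timeout    65;               # avoid a full handshake per request from real browsers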

    Read the article
