Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • Is Gmail Being Blocked by my ISP (wait till you read this)?

    - by James
    This is the strangest thing I have ever encountered. I have a desktop on which I cannot access Gmail and also YouTube sign-in (I believe since YouTube is owned by Google they both use the same sign-in system). So okay, maybe my ISP is blocking these for some reason, or maybe my firewall is, or maybe there is something wrong with my connectivity, right? NO. On other computers that use the same connection via a wireless router I can access both Gmail and YouTube sign-in just fine. On this computer, which doesn't have a wireless card so I have to connect via Ethernet cable (through a USB converter, since the Ethernet port doesn't work anymore), I can access all sites and services including things like AOL and Hotmail. But only when it comes to Gmail do I get complete and utter throttling. I even turned off my AV and firewall momentarily, and no luck. The Gmail page starts to load and by the midpoint it just stays there loading and loading and loading... it never ends. I tried everything; I reset the modem and router multiple times. I reinstalled my operating system, going from Vista to Windows 7, hoping a complete reinstall would solve the issue, but no luck. So can anyone for the life of them figure out why this could be? And yes, I am going to call my ISP, but not to solve this issue - to cancel them. I want to upgrade to cable from DSL anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay, someone let me know and I will). P.S. All this happened in one day; before that, Gmail was perfectly accessible on this computer. I can't remember anything special that happened that day prior to this. The only thing I can think of is that my ISP or Google itself is blocking this computer based on its MAC address, but I don't know if that's even done. Additional info: PC: Windows 7 Ultimate 32 bit; Connection Type: DSL; Connecting Medium: Ethernet cable via USB converter
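    One quick check that fits these symptoms (a page that loads partway and then stalls, on a DSL line behind a USB Ethernet adapter) is a path-MTU problem rather than blocking. A hedged diagnostic from a Windows command prompt; the probe sizes assume a standard 1500-byte Ethernet MTU and are not from the original post:
      :: 1472 bytes of payload + 28 bytes of IP/ICMP headers = 1500; -f sets don't-fragment
      ping -f -l 1472 www.google.com
      :: if the probe above fails ("Packet needs to be fragmented but DF set"), try a smaller size
      ping -f -l 1400 www.google.com
    If only the smaller probe succeeds, the USB adapter or DSL link is clamping the MTU, which can stall large responses such as the Gmail page while small sites still load.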

    Read the article

  • App for family tech support tracking?

    - by slothbear
    I do tech support for several groups within my family. They usually have a document or notebook of questions for me. They often record my advice, but then ask me again later. Some communications are by email (nice record for me, although they never think to search). Some sessions are in person, usually with a followup email from me for the record. Which they forget about. I'm not trying to force them to be more 'professional', but I would like to streamline my support a bit, and give them a place to look for past answers. Some of them would like a standard place like that, rather than reasking me the same questions. The solution has to be free. And web-based, although email-in for questions would be great. I'll be doing most (all?) updating of the system. Mobile/iPhone access would be nice, but not required. Ideally, a system with topics and responses would be good, but I'd need a way to promote one response as 'the answer'.

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two-machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMWare virtualization environment (three Dell R710s with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMWare HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover, or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VMs, will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html Take a look at the second image under "Not Supported" http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80, however I'm struggling to get my VPN to behave how I'd expect, so the developers don't have the access they require. The VPN connects fine and I can access the back-end database (SQL Server), which resides on the server, with the client tools from the laptops. However they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (of 192.168.100.1 - 20). The clients pick up a valid IP address (i.e. 192.168.100.9) when they connect. There is no name resolution set up, DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the IP, but not map by it. Some details: Server OS is Windows 2008 (Datacenter); VPN is SSTP using RRAS; Clients are all Windows 7; I've tried temporarily disabling the firewalls. So, why can we not access the file system when everything else (ping, RDP, SQL Server client tools) works? Thanks for your help Duncan
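    "System error 53" alongside a working ping usually means the SMB port itself (TCP 445) is unreachable or filtered on the VPN interface, so that is worth isolating first. A sketch; the commands below are generic Windows tools, not taken from the original setup:
      :: from a VPN-connected client (the Telnet client is an optional Windows 7 feature):
      telnet 192.168.100.1 445
      :: on the server, allow file sharing through the firewall for all profiles if it isn't already:
      netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
    If the port connects but net use still fails, the next suspects are the Server service bindings and name/signing policy rather than routing.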

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on serverfault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would assume the server to be bored by the dataflow. But as usual with this server, it really slowed down. At least I could work with the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor... Icons were loading maybe one per second. We saw the following (from memory):
      CPU: 2%
      Avg. Queue Length: 50
      Pages/sec: 115 (?)
    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:
      Windows 2003
      SEAGATE ST3500631NS (7200 rpm, 500 GB)
      LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
      Write Through
      No read-ahead
      Direct Cache Mode
      Hard disk cache mode: off
    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!
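    Windows 2003 has no per-process I/O priority, but the incoming copy itself can be throttled so the RAID-5 write path is not saturated. A sketch using robocopy's inter-packet gap; the paths and the 50 ms value are placeholders:
      :: /IPG inserts a delay (in ms) between 64 KB blocks, trading copy speed for responsiveness
      robocopy \\old-desktop\share D:\incoming /E /IPG:50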

    Read the article

  • iptables forwarding to a dummy interface

    - by madinc
    Hi, I'm trying to accomplish the following: I have a box with a service listening on a dummy interface (say 172.16.0.1), UDP port 5555. Now what I'd like to do is take packets that arrive on interfaces eth0 (1.1.1.1:5555) and eth1 (2.2.2.2:5555), forward them to the service on the dummy interface, and have replies go back to clients out the same physical interface they came in on. Clients must think they're talking to 1.1.1.1:5555 or 2.2.2.2:5555. I think I need a mix of iptables rules and packet marking, plus some iproute rules (if it's possible at all). What I tried is to catch packets coming in from eth0 and eth1, UDP port 5555, mark them with 1 and 2 respectively, and --save-mark in the connmark. Then I used a DNAT to 172.16.0.1. The service seems to be getting the packets. Now I'm not sure how to do the reverse. It seems that for packets originating from the box you can't do anything before the routing decision, but that would be the place to restore the marks, and thus make a routing decision based on those. Here's what I have so far:
      iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j MARK --set-mark 1
      iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j MARK --set-mark 2
      iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j CONNMARK --save-mark
      iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j CONNMARK --save-mark
      iptables -t nat -A PREROUTING -m mark --mark 1 -j DNAT --to-destination 172.16.0.1
      iptables -t nat -A PREROUTING -m mark --mark 2 -j DNAT --to-destination 172.16.0.1
      # What next?
    As I said, I'm not even sure it can be done. To give a bit of background, it's an old OpenVPN installation that cannot be upgraded (otherwise I'd install a recent version that supports multihoming natively). Thanks for any help.
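    For the return path, one commonly used pattern (a sketch, not tested against this setup; the gateway addresses are placeholders) is to restore the connection mark on locally generated replies and then route by mark:
      # put the saved connmark back onto reply packets leaving the service
      iptables -t mangle -A OUTPUT -p udp --sport 5555 -j CONNMARK --restore-mark
      # send marked traffic back out the interface it originally arrived on
      ip rule add fwmark 1 table 101
      ip rule add fwmark 2 table 102
      ip route add default via <gateway-on-eth0> dev eth0 table 101
      ip route add default via <gateway-on-eth1> dev eth1 table 102
    Conntrack should reverse the DNAT on those replies automatically, so clients still see 1.1.1.1:5555 or 2.2.2.2:5555 as the source.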

    Read the article

  • Webserver max CPU when Apache and MySQL are run together

    - by Tim
    This website has been running fine without issues; recently it went down. After some investigation it looks like the combo of MySQL and Apache brings the box to its knees. Apache can run fine serving static web pages, and MySQL can run fine when the website isn't enabled. As soon as the website is enabled with SQL running, the CPU on the box stays at 100%. Picture of the usage: http://i.stack.imgur.com/GG2NC.png I've checked the SQL database for errors and tried tuning nearly every parameter in Apache's and SQL's conf files for performance. The server is a Red Hat-based box running the latest software packages. Any help/suggestions are welcome. Doing an strace on a high-CPU Apache process I see the following:
      read(14, "", 8192) = 0
      close(14) = 0
      socket(PF_FILE, SOCK_STREAM, 0) = 14
      fcntl64(14, F_SETFL, O_RDONLY) = 0
      fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
      connect(14, {sa_family=AF_FILE, path="/var/lib/mysql/mysql.sock"...}, 110) = 0
      setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
      setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0", 8) = 0
      setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
      setsockopt(14, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
    Here is what I see from a MySQL process:
      futex(0x86fc9a4, FUTEX_WAIT_PRIVATE, 39, NULL) = 0
      futex(0x86fc734, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
      futex(0x86fc734, FUTEX_WAKE_PRIVATE, 1) = 0
      gettimeofday({1301465020, 141613}, NULL) = 0
      clock_gettime(CLOCK_REALTIME, {1301465020, 141699633}) = 0
      futex(0x8707a64, FUTEX_WAIT_PRIVATE, 1, {4, 999913367}) = 0
      futex(0x8707a40, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
      futex(0x8707a40, FUTEX_WAKE_PRIVATE, 1) = 0
      exit_group(0) = ?
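    Before digging further into strace output, it usually pays to see what MySQL is executing while the CPU is pegged. A sketch; the socket path is taken from the trace above, while the log path and threshold are placeholders:
      # what queries are running right now?
      mysqladmin --socket=/var/lib/mysql/mysql.sock -u root -p processlist
      # my.cnf sketch: log anything slower than 2 seconds so the offending query shows up
      # [mysqld]
      # log-slow-queries = /var/log/mysql-slow.log
      # long_query_time = 2
    A single unindexed query issued on every page load is a common way for Apache and MySQL to peg a box together while each behaves fine in isolation.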

    Read the article

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    Ok, this might sound stupid - but is there any negative to just enabling jumbo frames in practice? From what I understand: Any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the max frame size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically though, as UDP is packet based, I do not really think any software WOULD send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my programming, as it is quite hard to get a decent MTU size for that without testing yourself, so you fall back in programming to max 1500-byte packets. The network in question is a standard small business network - we upgraded now from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear - quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment on the Ethernet level can handle a minimum of 9000 bytes, and due to local firewalls I really want to get larger packets (less firewall processing), but the network is also NAT'ed to the internet. On top, different machines move around (download) large files (multi-gigabyte area) quite often for processing. The question is - can I expect problems when I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending more than 1500-byte UDP packets (if that is a practical problem please tell me) and for TCP the MTU is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations must then be on both VLANs.
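    One low-risk way to answer this empirically is to verify the jumbo path end to end before depending on it. A sketch from a Windows Server 2012 prompt; the host names are placeholders, and 8972 bytes of payload + 28 bytes of headers assumes a 9000-byte MTU:
      :: must succeed between any two jumbo-enabled hosts (file server, iSCSI target, ...)
      ping -f -l 8972 fileserver01
      :: the NAT'ed internet path should still work at the standard 1500-byte MTU
      ping -f -l 1472 8.8.8.8
    If the first probe is dropped anywhere, that device is the one that would silently eat oversized frames in production.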

    Read the article

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
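    If the crashes at 400-500 concurrent connections are really a file-descriptor limit, that is easy to confirm and raise on the client side before blaming the server. A sketch, assuming a Linux test box and a hypothetical server on port 8000:
      # check, then raise, the per-process open-file limit for this shell and re-run the benchmark
      ulimit -n
      ulimit -n 65536
      ab -n 20000 -c 500 http://localhost:8000/
    Running ab from a second machine also rules out the test client competing with the server for the same CPU and descriptor pool.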

    Read the article

  • Inconsistent SMTP Access

    - by Mike Hanson
    I have a mail server set up on Windows Server 2008. All was working fine, until I wanted to map a drive on the server so that I could access files on another machine. Windows prompted me to configure Network Discovery, which I did with the "Home/Office" option rather than "Public". After that, several access points that worked before stopped working, like VNC, SMTP, etc. After reinstalling those packages, things appeared to be working again. Unfortunately, problems have returned with my SMTP server. I can use a web-based SMTP tester, and it connects in 62 msec (as expected). However, if I telnet from my machine on the same LAN, it takes more than 20 seconds to connect! When I try to send messages from Outlook, it times out entirely with the message: 'Sending' reported error (0x80042109): 'Outlook cannot connect to your outgoing (SMTP) e-mail server. If you continue to receive this message, contact your server administrator or Internet service provider (ISP).' I've checked the firewall settings, and I've tried configuring it to use port 587 instead of 25, but nothing gets around this problem. Does anyone have any useful insights? Thanks in advance!

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% with 5-second page-to-page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time. Management has decided that we take the worst-case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability! I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, that since these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., the CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post)

    Read the article

  • How do I setup routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: 2 companies (A & B) share office space and a LAN. A 2nd ISP is brought in, and company A wants its own Internet connection (ISP A) while company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the 2 companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company its own gateway for its Internet connection as the default gateway. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.) Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this: If the destination address is known, forward the traffic (traffic destined for a different VLAN). If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or forward the traffic to ISP B's gateway if the source address is on VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
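    That is essentially what policy-based routing provides. The exact syntax depends on the L3 switch in use, but the logic can be sketched with Linux policy routing; the subnets, gateways, and table numbers below are placeholders, not taken from the question:
      # internal traffic (both company VLANs and the voice VLAN) keeps using the normal table
      ip rule add to 10.0.0.0/16 lookup main priority 100
      # everything else is routed by source: company A out ISP A, company B out ISP B
      ip rule add from 10.0.1.0/24 lookup 101 priority 200
      ip rule add from 10.0.2.0/24 lookup 102 priority 200
      ip route add default via 203.0.113.1 table 101
      ip route add default via 198.51.100.1 table 102
    With this model every host can share the same default gateway (the L3 switch), and the asymmetry disappears because the switch makes both the inter-VLAN and the internet decision.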

    Read the article

  • Do I need to restart my server after editing fstab and mtab?

    - by jaypabs
    I'm just wondering if I need to restart my server after editing fstab and mtab. I changed something in these files manually due to a problem with the awstats report. I am using ISPConfig 3 with the help of the tutorial from howtoforge. But due to the removal/deletion of some accounts, the fstab and mtab configuration got messed up. I also asked this question at the howtoforge forum but so far no one has answered. If you'd like to read my question please visit it here. I tried very hard to fix the problem without luck. Update: Here's what happened to my fstab. Before, the values were (I omitted the others):
      /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web1/log none bind,nobootwait 0 0
      /var/log/ispconfig/httpd/example.com /var/www/clients/client1/web2/log none bind,nobootwait 0 0
    So I changed them to the correct paths:
      /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web2/log none bind,nobootwait 0 0
      /var/log/ispconfig/httpd/example.com /var/www/clients/client1/web3/log none bind,nobootwait 0 0
    I also found mtab to have the same values as above, which is why I edited it manually. From:
      /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web1/log none rw,bind 0 0
      /var/log/ispconfig/httpd/example.com /var/www/clients/client1/web2/log none rw,bind 0 0
    to:
      /var/log/ispconfig/httpd/mydomain.com /var/www/clients/client1/web2/log none rw,bind 0 0
      /var/log/ispconfig/httpd/example.com /var/www/clients/client1/web3/log none rw,bind 0 0
    I edited those values because the correct paths for mydomain.com and example.com should be under the web2 and web3 folders respectively. As of now the log of example.com is pointed to /var/www/clients/client1/web2/log when it should be /var/www/clients/client1/web3/log. So I am thinking that this is because of fstab and mtab. Please guide me on how to point the logs correctly to their default directories. I explain the scenario step by step at this link. To dawud: based on your example mount -o remount,noexec /var, should I run mount -o remount,noexec /var/log/ispconfig/httpd/example.com?
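    A reboot shouldn't be needed for bind mounts: fstab is only read when something calls mount, and mtab is rewritten by the mount command itself, so hand-editing mtab is normally unnecessary. A sketch using the paths from the question (run as root, and only if briefly unmounting the log directories is acceptable):
      # drop the stale bind mounts, then re-apply the corrected fstab
      umount /var/www/clients/client1/web1/log
      umount /var/www/clients/client1/web2/log
      mount -a
      # check what the kernel actually has mounted (more reliable than /etc/mtab)
      grep clients /proc/mounts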

    Read the article

  • How does Windows 7 taskbar "color hot-tracking" feature calculate the colour to use?

    - by theyetiman
    This has intrigued me for quite some time. Does anyone know the algorithm Windows 7 Aero uses to determine the colour to use as the hot-tracking hover highlight on taskbar buttons for currently-running apps? It is definitely based on the icon of the app, but I can't see a specific pattern of where it's getting the colour value from. It doesn't seem to be any of the following: An average colour value from the entire icon, otherwise you would get brown all the time with multi-coloured icons like Chrome. The colour used the most in the image, otherwise you'd get yellow for the SQL Server Management Studio icon (6th from left). Also, the Chrome icon used red, green and yellow in equal measure. A colour located at certain pixel coordinates within the icon, because Chrome is red -indicating the top of the icon - and Notepad++ (2nd from right) is green - indicating the bottom of the icon. I asked this question on ux.stackoverflow.com and it got closed as off-topic, but someone answered with the following: As described by Raymond Chen in this MSDN blog article: Some people ask how it's done. It's really nothing special. The code just looks for the predominant color in the icon. (And, since visual designers are sticklers for this sort of thing, black, white, and shades of gray are not considered "colors" for the purpose of this calculation.) However I wasn't really satisfied with that answer because it doesn't explain how the "predominant" colour is calculated. Surely on the SQL Management Studio icon, the predominant colour, to my eyes at least, is yellow. Yet the highlight is green. I want to know, specifically, what the algorithm is.

    Read the article

  • Is it possible to have a conditional formatting cell "visually cycle" through all the formats that evaluated true?

    - by Ben
    Like the title says, "In Excel, when a cell has multiple conditional formatting rules that evaluate true, is it possible to have the cell "visually cycle" through all the formats that evaluated true? If not, suggestions on what to do would be appreciated!" I'm creating an employee schedule for a business that has multiple job areas that need to have an employee assigned to cover. The schedule is currently set up with the date on the top row, employee list down the left column, and the employee's assigned "job area" cross-referencing with the date on the top row. Originally it was set up where if every required "job area" didn't have someone assigned to it, the date would (via conditional formatting) change to red. I've set it up now that if a condition isn't met, the date will change to the color of the "job area" that doesn't have an employee assigned to it. However, there are cases where multiple job areas don't have an employee assigned, but the date will only change color based on the first condition that isn't met. It'd be nice if there was some way for the date cell to cycle through the different colors that correspond to the job areas where no one is assigned. I have a hunch that's not possible though. If it is possible, I'd love to know how to do it. And if it isn't, if anyone has any suggestions on how I can modify the Excel sheet to make it easier to identify the job areas that don't have anyone assigned to them, I would appreciate it. FYI This schedule goes out months in advance.

    Read the article

  • open-source LAMP package for simple accounting and receiving?

    - by user26664
    Hello. I am involved with a computer-based charity where we take donations of old equipment, often recycle it, mostly rehabilitate it and make it available through grants and 'adoptions', and sell some items. What we're looking for is a LAMP package that can handle our records of equipment donations, print receipts for donors, and also track our thrift store sales with receipts. Donors will want receipts for their donations for tax-deduction purposes. We'll want to print reports of incoming items from time to time, say monthly and yearly. For the thrift store, we'll also need receipts, reports of cash (especially for reconciling cash drops), and reports of items sold from time to time. I'm thinking this might be a single package, but it might be two. We don't want to track our shop inventory with either of these programs -- that's another program/project. We just need to know what was donated and what was sold. It must be open source, and ideally we'd like it to run on LAMP -- Linux, Apache, MySQL, PHP -- but we will consider other open-source platforms.

    Read the article

  • Can you transfer a Windows 7 XP-mode VHD to a new PC without the need for reactivation?

    - by Henk
    Hi. At the moment I am running Vista 32-bit and use VPC 2007 a lot. I have several W2000 VMs set up with complex software configurations. For me, one of the advantages of working with VPCs is that a VM can easily be copied to another PC (like a notebook). Also, when I move to a new PC, I can immediately work with my existing VMs without the need to reinstall all applications. I am thinking about buying a new PC with Windows 7 64-bit Pro, and using the XP-mode VM that comes with Windows 7. The advantages would be support for more memory (so being able to run more VMs simultaneously) and having a licensed XP VM. I would like to know if you can transfer a VHD that is based on the XP-mode VHD to another machine without activation issues. The other machine will, of course, also be running W7. I would guess that this would not be a problem. However, I did read about a guy who had to reinstall his W7 because of a hard disk failure, and could not access his XP-mode VHD anymore because it demanded re-activation. Thanks. Henk.

    Read the article

  • virtual memory committed

    - by vinu
    After a server bounce, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts, which indicate swap space usage on the order of 4 GB. This also causes the application to perform very slowly and experience a number of stalled transactions.
    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers. The Apache servers themselves supply static content and routing to these 4 Tomcat servers.
    Java runtime version: java version "1.6.0_30", Java(TM) SE Runtime Environment (build 1.6.0_30-b12), Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode).
    Memory startup parameters: MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.
    Note: each of the servers also has two other separate Tomcat domains that run different applications.
    Investigated so far: There is no heap memory leak, and GC runs fine without any issues over any period of time. The current busy thread count corresponds directly to application usage - weekends and nights have fewer threads than business hours. ThreadLocal uses a WeakReference internally; if the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal - sort of a circular reference. In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal. There's no memory leak in this code. Can anyone please let me know what could be the cause of this and what should be monitored?
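    When the heap looks clean but swap keeps growing over weeks, it helps to see which processes actually own the swapped pages and how eager the kernel is to swap. A sketch, assuming a Linux host with a kernel new enough to expose VmSwap (2.6.34+); the swappiness value is an example, not a recommendation for this system:
      # list the largest swap consumers, one line per process (kB of swap, then the /proc path)
      for d in /proc/[0-9]*; do awk -v p="$d" '/^VmSwap/ {print $2, p}' "$d/status" 2>/dev/null; done | sort -n | tail -20
      # make the kernel less eager to page out idle JVM memory (default is usually 60)
      sysctl -w vm.swappiness=10
    If the Tomcat JVMs themselves hold the swap, the next place to look is the resident footprint outside the 1 GB heap (thread stacks, PermGen, and native allocations) against the machine's physical RAM.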

    Read the article

  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    Seems today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a WordPress-based site. So here is what I DO know: I use a program that pings the server every minute, and when the server is not responding it emails me, so I know exactly when the site is online and offline. The site was going up and down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone, and all times below are in the same timezone). At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site is going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.

    Read the article

  • Cross-OS data recovery question, USB drive involved.

    - by Moshe
    Here's the story: A MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and it wouldn't boot - presumably a virus. There were sensitive files there and those were successfully copied to a USB drive, and then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there, unless it can be resoldered. The issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money (not the latest version) files for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how to best resolder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

    Read the article

  • Dell R320 RAID 10 with CacheCade

    - by Geekman
    I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS. Right now I'm fairly settled on: 4 x 600 GB 3.5" 15K RPM SAS in a RAID 1+0 array. This should give good performance, but if possible I want to also add an SSD cache into the mix, and I'm not sure if there's enough room. According to the tech specs, there are only up to 4 total 3.5" drive bays available. Is there any way to fit at least a single SSD drive alongside the 4 x 3.5" drives? I was hoping there's a special spot to put the cache SSD drive (though from memory, I doubt there'd be room). Or am I right in thinking that the cache drives are simply drives plugged in "normally", just as any other drive, but nominated as CacheCade drives in the PERC controller? Are there any options for having the 4 x 600 GB RAID 10 array and the SSD cache drive too? Based on the tech specs (with up to 8 x 2.5" drives), maybe I need to use 2.5" SAS drives, leaving another 4 bays spare - plenty of room for the SSD cache drive. Has anyone achieved this using 3.5" drives, somehow?

    Read the article

  • Loss of network connectivity when playing video on Optoma HD180 projector

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting this one. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels. This is my setup: I have a Dell H2C 730x running Windows 7 64-bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6 Mbps. The problem I am having occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector, and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time. I can move the mouse around on the projector without triggering the problem. I tried pinging my router when I was playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or it times out completely. So, it clearly is something local to my machine - and not some sort of throttling occurring down the line. I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

    Read the article

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. Traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests). The webapp depends on many servers (databases, frontends, etc...). I can think of two basic directions: (1) Recording every incoming request with its timestamp in production in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial. (2) Having a system duplicate requests to a staging area so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about this except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition. What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.
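    For the second direction (duplicating live requests to a staging copy), there are tools built for exactly this; one frequently mentioned option is GoReplay (originally shipped as the gor binary). A sketch, with the staging host as a placeholder and under the assumption that mirroring HTTP traffic is acceptable in your environment:
      # on a production frontend: capture port 80 and replay it against staging without touching responses
      gor --input-raw :80 --output-http "http://staging.internal:80"
    This only covers HTTP, so session-heavy logic still needs the staging side to accept the replayed cookies, and anything stateful (payments, email) should be stubbed out there.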

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be referenced by two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need their own SSL certificate, inside our IIS 7.5 configuration we have to have two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design and that you can't assign more than one SSL certificate per IP address and domain name. My question is: is there any way around this limitation while keeping one web application in IIS and having it service two SSL certificates based on host name? I know that with the basic IIS configuration this is not possible, but I was thinking that with some sort of combination of external load balancing and/or SSL acceleration servers/services we could have those servers process the SSL request and leave IIS clean with one single application. I am not familiar at all with these technologies, hence the reason I am asking if it is theoretically possible. If not, does anyone else know how to achieve this?

    Read the article

  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely. I tried using Python's win32file.FindFilesIterator to iterate the files but that also hangs. My idea was to move each file to a different directory (in a directory above the directory we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory... But since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory I'm willing to do that, but a standard delete also hangs indefinitely. I have set these two parameters to increase speed and they also did not help the issue:
      R:\>fsutil behavior query disablelastaccess
      disablelastaccess = 1
      R:\>fsutil behavior query disable8dot3
      disable8dot3 = 1
    These are all sequential images that would have run into the 'bug' with 8.3 filenames whereby many similarly named files in one directory can take a long time to compute 8.3 filenames. From what I understand this data is stored in the file system even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?
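    If nuking the directory is acceptable, the usual trick is to keep Explorer out of it entirely and let robocopy mirror an empty directory over the huge one, which deletes without building a full in-memory listing. A sketch, with R:\bigdir standing in for the real path:
      :: /MIR makes the destination match the (empty) source; the /N* switches suppress per-file output
      mkdir C:\empty
      robocopy C:\empty R:\bigdir /MIR /NFL /NDL /NJH /NJS /NP
      rmdir R:\bigdir
      rmdir C:\empty
    Expect it to take a while regardless; the win is that it makes steady progress instead of hanging.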

    Read the article
