Search Results

Search found 28957 results on 1159 pages for 'single instance'.

Page 616/1159

  • What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

    - by Ogapo
    I am working on a Windows game, and while rendering, some computers experience intermittent pauses ("hitches" for lack of a better term). When profiled, they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system. All of the threads in my application hitch at once. The CPU utilization drops during these hitches, and it appears as if most processes make no progress. This leads me to believe this may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no? Some systems in use: Windows, DirectX (3D), nVidia cards (unknown if it replicates on ATI), using overlapped I/O for streaming.
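    One cheap way to test the every-process-stalls theory, sketched here in Python (the interval and threshold are arbitrary choices of mine, not from the post): run a watchdog in a second process that sleeps in small slices and reports any gross oversleep. If its stalls line up with the game's hitches, the pause really is system-wide.

        import time

        INTERVAL = 0.010    # ask for 10 ms naps
        THRESHOLD = 0.100   # report oversleeps of 100 ms or more

        last = time.perf_counter()
        while True:
            time.sleep(INTERVAL)
            now = time.perf_counter()
            stall = (now - last) - INTERVAL
            if stall >= THRESHOLD:
                # This process was off-CPU far longer than requested; if the
                # timestamps match the game's hitches, no user thread ran.
                print(f"{now:10.3f}s: stalled ~{stall * 1000:.0f} ms")
            last = now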

    Read the article

  • Unable to synchronize local and remote directories ("set times: Operation not permitted")

    - by Tom Auger
    I'm running into FTP errors using software like NetBeans or WinSCP: whenever I attempt to synchronize or update files from local to server, I get errors on the client saying "set times: Operation not permitted". This is clearly an issue with the way I've configured my Fedora installation. The user that I'm logging in with cannot touch -t any of these files, though he IS part of a group that has r/w access on them. I do have root/sudo access to this server. What I would like to know is: a) is it likely that this problem would be solved by allowing my FTP user to "touch -t" these files, and b) how do I enable a certain user to set timestamps on files without giving them ownership of the files (certain of these files need to be owned by Apache, for instance, so I don't want to chown them)? Thanks in advance.
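    Background on that exact error, with a small Python illustration (the path is hypothetical): POSIX only lets the file's owner, or root, set timestamps to an explicit value. Group write access is enough to touch a file to "now", but not to back-date it, which is what synchronization tools do when they "set times".

        import os, time

        path = "/var/www/html/index.html"   # hypothetical file owned by apache

        an_hour_ago = time.time() - 3600
        try:
            # Explicit times (what `touch -t` and WinSCP's "set times" do):
            # allowed only for the owner or a privileged user.
            os.utime(path, (an_hour_ago, an_hour_ago))
        except PermissionError as err:
            print("set times as non-owner:", err)

        # times=None means "now": plain write permission is enough,
        # so a group member with r/w access succeeds here.
        os.utime(path, None)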

    Read the article

  • How to go about designing an intermediate routing filter program to accept input and forward accordingly?

    - by phileaton
    My predicament: I designed an app, written in Python, to read my mail and check for messages that contain a certain digital signature. It opens these and looks for keywords; if the message contains these keywords, certain related functions are executed on the computer. It is a way I can control my computer from my cell phone without being there. I am still in the beginning stages, and it can currently only remotely open and close applications/processes. The obvious issue is security risks, which I hoped to head off by requiring and checking for that digital signature. However, my issue comes when I'd like to make this program usable by multiple users. The idea is that the user will send keywords (username and password, for instance) to log into their personal email account and send messages to it to be parsed. Please ignore the security implications of sending non-encoded passwords through email (though if you could help me on that part I'd much appreciate it as well; currently, that is not the scope of my question). My issue is designing an intermediary process that will take an email/password to read an email and scan for those keywords. The issue is that the program has to be accessing an email account to read the email containing the username/password! I have got myself into a loop and cannot figure out how to build this required intermediary program. I could just create an arbitrary email account and have that check for login creds, but is there a better way of doing this? Also, is there a better way of communicating with a computer remotely than this, especially if the computer is not a server and is behind a router with only a subnet IP? If I am asking this question in the wrong place, I deeply apologize. Any help would be much appreciated!
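    A minimal sketch of the mail-polling side, using the standard imaplib and an HMAC tag in place of whatever digital signature the app actually checks. The host, account, key, header name, and OPEN keyword are all my placeholders, and it assumes plain, single-part text messages.

        import email
        import hashlib
        import hmac
        import imaplib

        HOST = "imap.example.com"                # hypothetical dispatcher mailbox
        USER, PASSWORD = "dispatcher", "secret"
        KEY = b"shared-secret-key"               # pre-shared with the phone

        def signed(body: str, tag: str) -> bool:
            # Constant-time check of an HMAC tag carried in the message.
            want = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(want, tag)

        imap = imaplib.IMAP4_SSL(HOST)
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, parts = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(parts[0][1])
            tag = msg.get("X-Command-Signature", "")   # hypothetical header
            body = msg.get_payload(decode=True).decode(errors="replace")
            if signed(body, tag) and body.startswith("OPEN "):
                print("would launch:", body.split(None, 1)[1])
        imap.logout()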

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by adelsors
    Imagine you have a web application using an in-memory collection that changes occasionally, loading it from storage on the Application_Start global.asax event and updating it whenever it changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is free, a good solution can be to maintain the information in Azure Storage Tables, read its contents on the Application_Start event, and propagate changes to all instances using the internal HTTP port available on Azure web roles. You need to follow these steps to leverage the internal HTTP endpoint:
      1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.
      2. Add a new WCF service to the Web Role, for example NotificationServices.svc.
      3. Add a method on the new service to receive notifications from other role instances.
      4. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the CreateServiceHost method to host the internal endpoint. Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service; this is guaranteed by the platform.
      5. Edit the markup of the service (right-click the .svc file and select "View markup") to set the new factory as the one used to create the service.
      6. Now you can notify changes to other instances, as in the sketch below.
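    The article's own snippet is WCF/C#; what follows is only a language-neutral sketch (in Python, with made-up peer addresses) of the fan-out pattern the steps describe: POST the change notification to every other instance's internal endpoint.

        import json
        import urllib.request

        # Hypothetical internal addresses of the other web role instances; a
        # real role discovers these at runtime from the endpoint defined in
        # step 1 ("InternalHttpEndpoint"), which is what each peer listens on.
        PEERS = ["http://10.0.0.5:8000/NotificationServices.svc/notify",
                 "http://10.0.0.6:8000/NotificationServices.svc/notify"]

        def notify_peers(change: dict) -> None:
            body = json.dumps(change).encode()
            for url in PEERS:
                req = urllib.request.Request(
                    url, data=body, headers={"Content-Type": "application/json"})
                try:
                    urllib.request.urlopen(req, timeout=2).read()
                except OSError:
                    pass  # a restarting peer reloads state in Application_Start

        notify_peers({"table": "Settings", "rowkey": "42"})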

    Read the article

  • Why do Apache access logs show a time-to-serve of zero - timer resolution issue?

    - by Rob
    When going through Apache 2.2 access logs, logging with the %D directive (the time taken to serve the request, in microseconds), I've noticed it's very common for a 200 response to have a given number of bytes but a "time to serve" of zero. For example, a given URL might be requested 10 times in a single day, a 200 response is sent for all of them, and all return, say, 1000 bytes. However, 7 of them have a "time to serve" of zero, while the other 3 have a time to serve of 1 second. Is this simply because the request was served faster than the resolution of the timer Apache uses?
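    A quick way to put numbers on it, assuming a LogFormat whose last field is %D (that layout is an assumption; adjust the field index to match yours): bucket the serve times by order of magnitude and see how many land on exactly zero.

        from collections import Counter

        buckets = Counter()
        with open("access.log") as log:           # hypothetical path
            for line in log:
                fields = line.split()
                if not fields:
                    continue
                try:
                    us = int(fields[-1])          # %D: microseconds to serve
                except ValueError:
                    continue
                # Bucket by power of ten so zeros stand out next to real timings.
                buckets["0" if us == 0 else f"1e{len(str(us)) - 1}"] += 1

        for bucket, n in sorted(buckets.items()):
            print(f"{bucket:>6} us: {n}")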

    Read the article

  • List all documents (webparts) and sites using a certain solution in SharePoint 2007

    - by tnolan
    I would like to uninstall a SharePoint application template (GroupBoard Workspace, to be exact), but I want to make sure nothing currently relies on it. I don't see any functions within stsadm that will tell me this information, and I have even tried SPM, which would work, but with such a huge site it's tedious to go through every single web and page to see which features are in use. Is there a way (probably with SQL, using the id from stsadm -o enumsolutions) to list everything that relies on a template within a given solution, including webparts on custom pages? If this is not possible, what is the best way to check dependencies prior to uninstalling a solution (especially since GBW is not the only one on my axe list)? Note: I know that stsadm -o deletesolution will stop me from removing something that is in use, but I want to see all of the things that are using a given solution.

    Read the article

  • Is there a browser-independent bookmark tool supporting tags, dates and free comments?

    - by bernd_k
    I am looking for a tool which helps me organize my personal bookmarks. I want to be able to assign tags and free comments to a bookmark, and to search my bookmarks by:
      - tags
      - date of bookmarking
      - pattern in title
      - pattern in URL
    It would be nice if it were web-based, to enable sharing my bookmarks between different machines, but it would be OK if it works on a single machine, as long as it has some import/export way to transfer the links to a new machine replacing the old. As browsers I'm using Firefox and ChromePlus; it would be nice if the solution works with both. By free comments, I mean additional remarks stored with a bookmark that are not essential for searching.

    Read the article

  • Set up router to VPN into proxy server

    - by NKimber
    I have a small network with a single LinkSys router connected to broadband in the US via Comcast. I have a VPN proxy server account that I can use with a standard Windows connection, allowing me to have a geographic IP fingerprint in Europe; this is useful for a number of purposes. I want to set up a 2nd router that automatically connects via VPN to this proxy service, so any hardware connected to router 2 looks as though it is originating network requests in Europe, while any hardware connected to my main router has normal Comcast traffic (all requests originating from the USA). My 2nd router is a LinkSys WRT54G2, and I'm having trouble getting this configured. Question: is what I'm trying to do even feasible? Should the WRT54G2 be able to do this with native functionality? Would flashing it with DD-WRT allow me to achieve my objectives?

    Read the article

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance, without asking, and I'm fine with it. I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it there are a couple of options:
      1) Have a checkbox when installing that says "allow automatic updates"
      2) Just have a preference somewhere that allows you to "disable automatic updates", so that you have to check for updates manually
    I'm leaning towards 2) because 1) feels like it might alienate non-technical users, and I'd rather avoid installation queries if possible. I'm also thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble; what are your thoughts? Another question: even if updates are automatic, perhaps they should be announced, if there are new features for example; otherwise users might not realize and use them. One thing that kinda scares me, though, is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do; otherwise you might be hooked up to a network that spoofs the address of the update server.
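    On the signature point, a minimal sketch of verifying a downloaded update against a publisher key pinned inside the installed app (Ed25519 via the pyca/cryptography package; all file names here are hypothetical):

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

        # The raw 32-byte public key ships inside the installed app, so a
        # compromised update server (or a spoofed one) cannot substitute it.
        with open("publisher.pub", "rb") as f:
            pub = Ed25519PublicKey.from_public_bytes(f.read())

        with open("update-1.2.3.bin", "rb") as f:
            payload = f.read()
        with open("update-1.2.3.sig", "rb") as f:
            signature = f.read()

        try:
            pub.verify(signature, payload)   # raises if payload was tampered with
            print("signature OK, safe to install")
        except InvalidSignature:
            print("rejecting update: bad signature")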

    Read the article

  • How can I tell which laptop touch-screens work well with a stylus (for drawing/taking notes)?

    - by BlueRaja
    I'm looking for a laptop with a touch-screen and stylus for drawing/note-taking. I've read the difference between the different kinds of styluses, but that's only half the story - what about the touch-screen? How do I know if the touch-screen supports "palm-rejection"? Or if the included stylus is a capacitive stylus or a "Wacom digitizer"? Or if the screen will even support Wacom? How can I tell how accurate the touch-screen is (from my testing, some definitely seem to have higher "resolution" than others)? Is there anything else I should be looking at? I don't see any of this information on, for instance, the Newegg specs page for a laptop.

    Read the article

  • What is the best way to compare vhost traffic?

    - by Bob Flemming
    Recently one of my servers has been subjected to malicious DDoS attacks. I have about 12 websites hosted on the server, which uses name-based virtual hosting, and I am trying to identify which virtual host(s) are getting bombarded with traffic. I have used tools such as iftop, which is good for identifying hosts that are consuming lots of bandwidth, and also apachetop, which is useful for identifying which resources are being requested on a single v-host. What I really need is a tool that shows me the amount of traffic received by each v-host in real time, so I can easily see which one is being targeted. Does such a tool exist?
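    Absent a ready-made tool, a rough sketch of the idea in Python, assuming each access-log line begins with the vhost name (i.e. a LogFormat starting with %v; both the path and that layout are assumptions):

        import time
        from collections import Counter

        LOG = "/var/log/apache2/access.log"   # hypothetical log with %v first

        counts = Counter()
        with open(LOG) as log:
            log.seek(0, 2)                    # start at the end: new traffic only
            deadline = time.monotonic() + 5
            while True:
                line = log.readline()
                if line:
                    fields = line.split()
                    if fields:
                        counts[fields[0]] += 1
                else:
                    time.sleep(0.1)
                if time.monotonic() >= deadline:
                    # Print a 5-second leaderboard, busiest vhost first.
                    for vhost, n in counts.most_common():
                        print(f"{n:6}  {vhost}")
                    print("---")
                    counts.clear()
                    deadline = time.monotonic() + 5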

    Read the article

  • How to use ccache selectively?

    - by Anonymous
    I have to compile multiple versions of an app written in C++, and I am thinking of using ccache to speed up the process. ccache how-tos have examples which suggest creating symlinks named gcc, g++, etc. and making sure they appear in PATH before the original gcc binaries, so ccache is used instead. So far so good, but I'd like to use ccache only when compiling this particular app, not always. Of course, I can write a shell script that will create these symlinks every time I want to compile the app and delete them when the app is compiled, but this looks like filesystem abuse to me. Are there better ways to use ccache selectively? For a single source file I could just manually call ccache instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for multiple source files.
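    One common alternative, sketched below under the assumption that the app's build system honors the CC/CXX environment variables: point the compiler at ccache for that one invocation only, leaving PATH and the filesystem alone.

        import os
        import subprocess

        # Override the compiler only for this build; every other project on
        # the machine keeps calling the real gcc/g++ directly.
        env = dict(os.environ, CC="ccache gcc", CXX="ccache g++")
        subprocess.run(["make", "-j4"], cwd="/path/to/the/app",  # hypothetical
                       env=env, check=True)

    Builds that hard-code the compiler path won't pick this up, which is worth checking before relying on it.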

    Read the article

  • Activating SSL on Tomcat

    - by toom
    I want to encrypt the HTTP traffic on a Tomcat instance via SSL. I followed the most simplistic approach described on various web pages, but it simply does not work. Here is what I did:
      1. "keytool -genkey -alias tomcat -keyalg RSA", entering "changeit" as the password (since this is the default chosen by Tomcat)
      2. Altering $CATALINA_HOME/conf/server.xml by uncommenting the following line:
         <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS"/>
      3. Restarting Tomcat
    Entering https://localhost:8443 does not work. However, I can still access the page via normal HTTP at http://localhost:8080. The logfile does not contain any suspicious information. What is going wrong here?

    Read the article

  • What does the 'm' unit in Munin mean?

    - by nbv4
    I'm using Munin as a tool for monitoring my servers. On some of the graphs, the units are marked with an 'm'. For instance, my Apache accesses graph is labeled 100m, 200m, 300m along the y-axis. What does the 'm' mean? I understand 'M' (caps) is mega, as in megabytes, 'k' is kilo, and 'G' is giga, but what about 'm'? At first I thought it was million, but there's no way Apache is serving 100 million accesses even per decade.
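    For scale, the arithmetic under the assumption that the lowercase 'm' is the SI prefix milli on a per-second graph (an assumption; the post itself leaves the unit open):

        # Assumption: 100m on a per-second Munin graph = 100 milli-accesses/sec.
        rate = 100e-3                  # 0.1 accesses per second
        print(rate * 86400)            # => 8640.0 accesses/day -- a sane figure,
                                       # unlike 100 million of anything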

    Read the article

  • Can I use static routing to allow me to use my public IP from my LAN?

    - by jnm2
    I would like to be able to use the same hostname to connect to my computer from my phone, whether I'm at home or away. Currently I have to maintain duplicate entries for remote desktop, for instance. My router doesn't seem to have a NAT loopback option. I have two routers, in fact: a cable modem that goes straight to my main router, which does wireless. I can add to the static routing tables on each. Can I use this to loop the public IP back, or do I need different routers?

    Read the article

  • Is it preferable to use POP or IMAP to check multiple Google Mail accounts from one?

    - by Adam Tuttle
    I have email accounts on several domains that use Google Mail, and thus far I've only ever used POP to send and receive mail from a single inbox. This is quite functional and, as long as I remember to select the appropriate FROM address when starting a new thread, mostly works without any additional thought: messages received on account A are replied to via account A, and so on. My only complaint is the lag: I've sometimes seen as much as a half hour between a message arriving in an inbox and being imported into my primary inbox via POP. My question is this: Google also supports IMAP. Would that be in any way preferable to POP access? Reduced lag would be nice, but not at a general speed cost, if everything I do has to check another mailbox too.

    Read the article

  • Full Text Search Strategy For My Website

    - by Hosea146
    I have a website that allows users to search for items in various categories. Each category is a separate area (page) of my website. For example, some categories might be cars, bikes, books etc. At the moment a user has to search for an item by going to the page (for example, cars) and searching for the car they want. I would like to allow the user to search for anything on my site, from my main home page. At the moment, each page (category) has its own set of tables, and I don't really want to turn Full Text Search on for each table (20+ of them) and search each table individually when a search is done. This is going to be slow and tedious. What I'm thinking of doing is creating a single table that will hold all searchable information for each category of item (when an item is saved in its respective table, I would copy all searchable information over to my 'Search' table). I would then turn Full Text Search on for that table, and search that table. Does this sound reasonable? Is there a better way? I've never used Full Text Search before, so this is new to me.
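    The single-search-table idea in miniature, illustrated with Python's built-in sqlite3 and its FTS5 full-text module. This is a sketch of the shape only, not the syntax of whatever RDBMS the site actually runs, and it assumes an FTS5-enabled SQLite build (most are).

        import sqlite3

        db = sqlite3.connect(":memory:")
        # One full-text table holds the searchable fields of every category;
        # (category, item_id) points back at the real row in cars/bikes/books/...
        db.execute(
            "CREATE VIRTUAL TABLE search USING fts5(category, item_id, title, body)")

        def index_item(category, item_id, title, body):
            # Called whenever an item is saved in its own category table.
            db.execute("INSERT INTO search VALUES (?, ?, ?, ?)",
                       (category, str(item_id), title, body))

        index_item("cars",  1, "2004 Honda Civic", "reliable commuter, manual")
        index_item("books", 7, "Civic Engagement", "essays on public life")

        # The sitewide search is now a single query across all categories:
        for row in db.execute(
                "SELECT category, item_id, title FROM search WHERE search MATCH ?",
                ("civic",)):
            print(row)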

    Read the article

  • Memcached clustered alternative

    - by Johan Kooijman
    I'm looking to replace memcached. We have a LOT of traffic to our central memcached node, which I'd like to split; there's only so much network trunking I can do. My general idea is to install a memcached-type daemon on every webserver and have the daemons replicate sets/deletes/updates across all of them, so that each webserver connects to a socket on localhost and all data is available on all nodes. The alternatives so far:
      - repcached (max 2 masters)
      - redis (single master)
      - couchdb/mongodb/handlersocket - persistent data on disk; I'd like to remove the disk part to gain more performance.
    Any hints?

    Read the article

  • Hard-drive will randomly fail to load GRUB. Booting a live USB/CD fixes the issue temporarily

    - by Usagi
    I am running 12.04 64-bit and am dual-booting with Win7 (for full disclosure, although I suspect that has nothing to do with my problem). Occasionally the boot loader (GRUB) will fail to load and I will be presented with a black screen with a single blinking line. There is no apparent pattern, although I suspect there is one and that it is related to a program I am running. This has happened on eight out of my last ten power cycles now, and I can fix it consistently; however, I have no idea why it happens. My current fix is to boot a live CD (I've tried both KNOPPIX and Ubuntu with the same result), and that's it. Somehow booting with the live CD is enough to "wake up" my hard drive. I then reboot and GRUB magically appears again. So what is going on? Is it possible that a program is corrupting my MBR and the live CD is restoring it? How can I narrow down the possibilities? Thanks. Additional: this is still a problem. I'm convinced now that it is not hardware-related, as I've spent the last month and several boot cycles on Windows without a hiccup. Recently, when I started using Ubuntu again, the problem started again. I am more interested in figuring out what is going on than in actually fixing the problem. Are there any tools, logs, etc. I can use to unravel this mystery?

    Read the article

  • Is there a feature in Nagios that allows memory between checks?

    - by Kyle Brandt
    There are various values I want to monitor with Nagios where I don't care as much about the value itself as about how it compares to the previous value. For instance, I wrote a check for the fail counters in OpenVZ: I didn't care about the value that much, but rather whether the value had increased. Another example might be switch ports, where I would be most interested in being alerted about a change of state of a port (although perhaps a trap would be better for that one). For my OpenVZ script I used a temp file, but I am wondering if there is a better way. Maybe Nagios has some variables that plugins (check scripts) can access that are persistent across checks?
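    A minimal sketch of the temp-file approach as a Nagios plugin, using the standard exit-code convention (0 OK, 2 CRITICAL); the state path is an arbitrary choice and read_counter() is a placeholder for the real probe.

        #!/usr/bin/env python3
        import os
        import sys

        STATE = "/var/tmp/check_failcnt.state"   # hypothetical state file

        def read_counter() -> int:
            # Placeholder: the real plugin would parse the OpenVZ fail
            # counters (e.g. from /proc/user_beancounters).
            return 42

        current = read_counter()
        previous = None
        if os.path.exists(STATE):
            with open(STATE) as f:
                previous = int(f.read())
        with open(STATE, "w") as f:
            f.write(str(current))

        if previous is not None and current > previous:
            print(f"CRITICAL: counter rose {previous} -> {current}")
            sys.exit(2)   # Nagios CRITICAL
        print(f"OK: counter stable at {current}")
        sys.exit(0)       # Nagios OK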

    Read the article

  • Simple Central Storage for HA mail server

    - by jtnire
    Hi everyone, I will have 2 Postfix servers, one a backup of the other. What is the easiest method to provide central storage to both of these boxes? My infrastructure is very simple: just a lot of Xen hosts, so there is no SAN or anything, though each Xen host does have RAID1. I don't mind mounting NFS shares on each of those mail servers, as long as the NFS server isn't a single point of failure. Is there such a thing as redundant NFS? Any help would be appreciated. Thanks

    Read the article

  • Fix X11 forwarding on OSX

    - by Such
    I am looking for a way to fix/debug an X11 forwarding session on OS X. Here is my situation: from my Mac I connect to an Ubuntu workstation with ssh -X (I tried ssh -Y as well). X11 forwarding works perfectly with Firefox, for instance: X11/Quartz is started automatically on OS X and Firefox is displayed. X11 forwarding does not work with bat (the Bacula graphical console): X11 is started but no window is displayed, and there are no errors (/private/var/log/system.log). When I try doing the same from another Ubuntu workstation, it works perfectly for both Firefox and bat, so I guess the problem is on the OS X side. I tried switching some options in X11 but nothing works. Would you have any idea on how to move forward? Thanks!

    Read the article

  • Apache port number

    - by user983223
    For each development site I want to have a unique port number, for instance domain.com:1234. This is what I have in my httpd.conf file; after a restart, the page domain.com:1234 is not showing in the browser. Is there anything else I need to do besides what I have already done to make this work?

        Listen *:1234
        <VirtualHost *:1234>
            DocumentRoot /var/www/dev_sites/test
            ServerName domain.com:1234
        </VirtualHost>

    It does show if I go to my local hostname (kk.local:1234). Is there some sort of DNS that I need to do? I really don't want to go into GoDaddy every time I add a development site. Is there a way around that?

    Read the article

  • Touchscreen on KDE and Ubuntu?

    - by The Quantum Physicist
    I just bought a Lenovo Yoga 2 Pro. I liked the activity of the touchscreen on Windows, where it makes sense, as it does on my smartphone. However, I'm not a regular Windows user, so I installed Kubuntu 14.04, and everything looks fine, except that the behavior of the touchscreen is so limited it's useless: all the touchscreen does is act as a single-button mouse (left click). For example, if I touch the screen for a relatively long time, I don't get the effect of a right click. How do I configure the touchscreen properly to get the expected behavior on Ubuntu and KDE? Thanks for any efforts.

    Read the article

  • How to get OS X Lion to save modifier key settings (i.e. swap Ctrl and Cmd)

    - by Huliax
    I use Lion at work with an MS Natural Ergonomic Keyboard 4000. Every single time I log in, I have to go into settings and swap the Command and Control keys, which is really annoying. Is there a way to get these settings to stick? Beyond that, I'd also like to remap a few other keys, and I'm interested in tools for doing that, but I think I need to work out the first issue first. Thanks for any help.

    Read the article
