Search Results

Search found 29920 results on 1197 pages for 'software tools'.


  • Linux file structure

    - by morpheous
    OK. I asked this question earlier (as part of another question) and got no response, so here it is again: what is the recommended directory for me to store the following: 1) my apps; 2) development tools (C++ tools); 3) AMP applications for the LAMP stack (Apache, MySQL, PHP); 4) files for websites that I develop on my machine, e.g. website1, website2, etc.
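
    A minimal sketch of one common convention (an editorial assumption, not the asker's setup): personal apps under ~/apps or /opt, self-built C++ tools installed to /usr/local, the LAMP components themselves from the distro's package manager, and one web root per site under /srv/www (or /var/www). The Python snippet below only creates that skeleton; every path is illustrative, and the system paths need root.

        import os

        # Illustrative locations only; adjust to your distro's conventions.
        layout = [
            os.path.expanduser("~/apps"),   # personal applications
            "/usr/local",                   # self-compiled C++ tools (make install PREFIX=/usr/local)
            "/srv/www/website1",            # one web root per site, served by the LAMP stack
            "/srv/www/website2",
        ]

        for path in layout:
            os.makedirs(path, exist_ok=True)  # system paths require sudo
            print("ensured", path)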

    Read the article

  • USB device goes to "Unknown device" randomly

    - by ILovePaperTowels
    We have a kiosk that uses a bio scanner from m2sys (a USB device). It scans your palm to recognize you. Every so often, maybe 1-3 times a day, the bio scanner will become an unknown device. We are unable to see any patterns or commonalities. When we unplug it and plug it back in, it becomes available again. We have custom software that uses the bio scanner's software to communicate with it. We've added a crap load of logging on everything, but there doesn't seem to be any pattern to when the thing shuts off. We have these devices deployed to multiple locations (100+) and they are all seeing the issue, but we cannot reproduce it here at the main office. I've evaluated the software and I don't see anything. I'm thinking it's a driver or hardware issue, or maybe environmental interference of some sort, like from scan guns, automatic doors, microwaves or something else. Any ideas would be welcome. I'm looking for possible causes of the USB device becoming unknown, or ways to figure out what the cause is. Notes: no other USB devices have this issue, only the scanner; we've contacted the manufacturer and they blame our software; we're getting help from Microsoft, but they haven't found anything; the OS is embedded XP. http://www.m2sys.com/palm-vein-reader.htm

    Read the article

  • Remote Desktop Printing With Color

    - by philibertperusse
    Our ACCPACC administration software runs on an off-site dedicated hosted computer, running Windows 2003 Server in a completely different NT domain. We have many users connecting to that computer remotely to perform administrative tasks such as printing cheques, invoices, POs, packing slips and so on. Basically, the setup is that we all connect using Remote Desktop Protocol (the local computers are Mac OS X, XP SP3, Vista and Windows 7). At our office we have a DOCUCOLOR 242 printer. When printing from the ACCPACC software, it prints to the local printer in our office. This is because we are using RDP printer redirection to make the local printer resources available on the remote computer. This almost works now: I had to install the printer driver software on the remote 2003 server for the printer sharing to work. Now everyone is able to print black and white, but color is out. NOTES: Normal users on that Windows 2003 server run under a Group Policy Object that restricts what can be done. I took one of these normal users and gave him full domain administrator rights; no effect, still B&W only. I then moved that account OUT of the GPO policies, as a normal account instead; no effect, still B&W only. It seems only MY account (which is a domain administrator AND a normal account not part of the GPO objects) can actually print in color. This is the account that was used to install the printer driver software. How can I get everyone to print in color? Any suggestions as to what to try next?

    Read the article

  • Problem setting up HP PhotoSmart C4783

    - by ProfKaos
    I am trying to get several PCs on a network to connect to my client's new wireless printer, the C4783. I initially installed the whole HP 'tsunami of software' that comes with the printer onto my laptop, connected to the printer over USB, and corrected its networking config. My laptop can now connect to and print on the printer over the wireless network, as is intended for all PCs. However, when I try to install the HP software on my client's netbook, the HP software cannot find the printer. The firewall was turned off during this time, so it doesn't play a role. I tried re-installing all the software, but I initially only got a "Repair" or "Uninstall" option. Choosing Uninstall triggers a lengthy process that ends with a sudden and forceful restart, after which the same "Repair" or "Uninstall" choices are present. Looks like uninstall doesn't. Any suggestions as to how I can begin diagnosing why my laptop has no problem connecting to the printer, but the two others do? Wireless is unsecured, and no, nobody has to know where I am right now. :-)

    Read the article

  • Unable to communicate with EWS from Exchange Server

    - by kschieck
    We are currently running a two-server Exchange environment with the Edge services on their own server. We are in the process of trying to deploy a piece of software that uses the EWS API, which has brought me to this forum; the software ties into the EWS service and uses it to forward messages (this is failing). Using the software's error logs I have found that accessing EWS from the Exchange server itself is not possible. From my work machine and from an external address I can browse to https://webmail.companyname.com/ews/exchange.asmx and be prompted for a username and password; once I enter credentials I get a screen full of information from services.wsdl. The problem is that when I try the same URL from the Exchange server and get the credentials prompt, I cannot get past it. Even with the same credentials that work externally and from my desk, it just keeps looping around. Capture from the software log: 11:41:32.6415 000017e4 System.Net.WebException: The request failed with HTTP status 401: Unauthorized. I get the same result when trying https://webmail.companyname.com/Autodiscover/Autodiscover.xml. Environment information: Server 2008 STD 64-bit, Exchange 2007 SP1, purchased cert for webmail.companyname.com. I have also confirmed that all services have the proper internal and external URLs. Any help would be appreciated.
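
    A hedged way to reproduce the 401 outside the vendor's product is to hit the same EWS URL with explicit NTLM credentials from both the Exchange server and an external machine, then compare. The sketch assumes the third-party requests and requests_ntlm Python packages; the account name and password are placeholders.

        import requests
        from requests_ntlm import HttpNtlmAuth

        # Placeholder credentials - use an account that already works externally.
        url = "https://webmail.companyname.com/ews/exchange.asmx"
        auth = HttpNtlmAuth("COMPANYNAME\\someuser", "password")

        resp = requests.get(url, auth=auth)
        print(resp.status_code)                          # 200 = NTLM auth succeeded, 401 reproduces the loop
        print(resp.headers.get("WWW-Authenticate", ""))  # shows which auth schemes the server offered

    If this returns 200 externally but 401 only when run on the Exchange server itself, that points at server-side authentication handling rather than the vendor software.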

    Read the article

  • Recording screen-casts on another computer

    - by paleozogt
    We're trying to record the desktops of users using demo versions of our software (this is an in-house lab setup). We need to have the recording happen on a separate computer (just across the room), so that the recording software doesn't interfere with the user. Every screen recording program I've seen will only record what's happening on the computer it's installed on; i.e., you can't record what's happening on another computer. So it seems I need to cobble together a solution (unless anyone knows of software that will do this). Getting the video to the other computer seems easy enough: I'm using TightVNC with the DFMirage driver on the test computer. The recording computer connects to the test computer with TightVNC and then uses CamStudio to record what's happening. The real problem is how to deal with the audio. We need to record both what the user is saying (through a headset mic) and the sounds produced by the test computer. But VNC doesn't transmit audio. :( I'm not sure how to get both audio streams (mic and sounds) over to the recording computer. Any ideas?

    Read the article

  • Zabbix on Amazon EC2: Installation

    - by 330xi
    I've read about mikoomi, and it is not suitable for me: I do not have access to the secret key. I want to install and run the Zabbix server and agent themselves on the instances. Is this a good idea? And here is the problem: during

        yum install zabbix-server-pgsql zabbix-web-pgsql zabbix-agent

    I got:

        Error: httpd24-tools conflicts with httpd-tools
        Error: httpd24 conflicts with httpd

    How can I complete the installation successfully?

    Read the article

  • Load and performance testing for webapps with JavaScript support

    - by MrG
    Years ago I used OpenSTA to perform load and performance tests. Unfortunately it doesn't support JavaScript, which is a requirement this time. But I remember that it offered great recording capabilities which let us quickly create new test scripts. Please let me know which tools you recommend. Free tools are clearly preferred ;)

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replicant data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same. We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place. In light of the fact that we can't simply ghost, I'm trying to find a reasonable way to audit the new data center and check whether it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work. Here are some of the constraints I'm working under:

    - The data center comprises multiple Windows and Linux machines of differing versions (about 20 total)
    - I absolutely cannot ghost or snap any other type of image of these machines … at least not in their final configuration
    - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center)
    - I would rather not install any additional tools on these machines … I'd much rather run it from a standalone machine or off a DVD
    - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this)
    - For the COTS that stores the network information, I don't know all of the places it stores the network information … so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?
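
    Short of deploying Tripwire everywhere, the same idea can be sketched from a standalone machine: build a manifest of file hashes on each box and diff the manifests afterwards. This is a hedged illustration only; the audited paths and output file name are assumptions and would differ between the Windows and Linux machines.

        import hashlib
        import json
        import os
        import sys

        ROOTS = ["/etc", "/opt"]   # illustrative config locations; use e.g. C:\ProgramData on Windows

        def manifest(roots):
            """Return {path: sha256} for every file under the given roots."""
            entries = {}
            for root in roots:
                for dirpath, _dirs, files in os.walk(root):
                    for name in files:
                        path = os.path.join(dirpath, name)
                        try:
                            with open(path, "rb") as fh:
                                entries[path] = hashlib.sha256(fh.read()).hexdigest()
                        except OSError:
                            entries[path] = "<unreadable>"
            return entries

        if __name__ == "__main__":
            # Run once per machine, then diff the two JSON files offline.
            out = sys.argv[1] if len(sys.argv) > 1 else "manifest.json"
            with open(out, "w") as fh:
                json.dump(manifest(ROOTS), fh, indent=2, sort_keys=True)

    Comparing the manifests from the two data centers highlights drift; the expected differences (hostnames, IPs, account names) will show up only in the files that actually contain them, which also helps map where the troublesome COTS stores its network settings.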

    Read the article

  • Options for small windows network setup without dedicated server?

    - by Mitch
    I'm very weak on networking and hope someone can point me in the right direction. I have written some Windows client/server software which incorporates a database located on a Windows server. I have a test installation running at a customer's office where the server has a static IP address. In this case it's easy for the clients to access the database because of the fixed IP address. Also, customers with network servers generally have specialist support staff to set up my software, so it's not such a problem for me. However, I also need to offer the software to customers who have small offices with fewer than 10 PCs and no dedicated network server. In this case I want the customer to be able to nominate one PC as the database "server", install my software on it, and have the clients access it. But in this situation I believe the "server" PC may not have a dedicated IP address. Q1: What is the best way to set this up simply and make it work? Can I reliably reference the "server" by using its name, or is there a way to assign dummy fixed IP addresses? Ideally this needs to be workable on small networks running a mixture of XP/Vista/Windows 7, as my target market may well have mixed OSes, etc. I guess this would be akin to home networking? Many thanks, Mitch
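
    As a quick sanity check for the name-based approach, a hedged sketch that resolves the nominated "server" PC by machine name and confirms the database port answers. The host name and port below are placeholders, not values from the question.

        import socket

        SERVER_NAME = "OFFICE-PC1"   # hypothetical machine name nominated as the "server"
        DB_PORT = 1433               # placeholder; use whatever port your database listens on

        addr = socket.gethostbyname(SERVER_NAME)   # DNS/NetBIOS resolution on the LAN
        print("resolved", SERVER_NAME, "to", addr)

        with socket.create_connection((addr, DB_PORT), timeout=3):
            print("database port reachable")

    If name resolution proves unreliable on a mixed XP/Vista/7 workgroup, a DHCP reservation on the office router gives the nominated PC a stable address without touching each client.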

    Read the article

  • How to configure networking on an appliance such that it can plug and play on any corporate network?

    - by Joshua Lim
    I had a chance to configure a Moxa NPort device server appliance on my client's network; it was very easy to do, done in just 2 minutes. Here's what I did: the Moxa device server had a preset IP address of 192.168.127.254 and subnet mask 255.255.255.0 (http://www.moxa.com/doc/manual/nport/5400/NPort_5400_Series_Users_Manual_v4.pdf). Moxa provides a Windows program which I used to "scan" for the device server. It worked like magic! The software returns a list of device servers found. Each device server is identified by its MAC address, and by selecting the device server in the software, I can reset the default IP address and subnet mask of that device server! In comparison, during an earlier project I spent 2 hours trying to get KVM to work for a Windows 7 embedded appliance I was trying to install on my client's network - http://superuser.com/questions/380305/how-to-configure-windows-7-professional-appliance-pc-on-my-clients-network-usin. Prior to that, I had already tried pre-configuring the IP address and subnet mask to the ones my client provided, yet the appliance still couldn't connect to the client's network! I also tried a crossover cable; that didn't work either. After KVM worked, I discovered that the network settings were "lost" after I plugged the machine into the client's network. Now my question is: what can I do to set up my Windows 7 embedded appliance so that it can connect to any network the way the Moxa device server does? I tried experimenting with this on my network using a Windows machine configured with an IP address of 192.168.127.254 and subnet mask 255.255.255.0, but it doesn't connect to my network, which uses 192.168.0.*. :( EDIT: I would like to point out that the Moxa Windows configuration software seems to be able to connect to any Moxa device on the network even if it is on a different subnet, as long as the network adapter shows "connected". This is important because the Moxa device has no VGSM port or interface to configure the IP address.
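
    The Moxa utility most likely finds devices with a broadcast-style discovery on the local segment rather than routed IP traffic, which is why the subnet mismatch doesn't matter as long as both ends sit on the same switch. A hedged sketch of that pattern follows; the port number, payload, and identity string are made up and are not Moxa's actual protocol.

        import socket

        DISCOVERY_PORT = 50000          # made-up port, not Moxa's real one
        PROBE = b"WHO_IS_THERE"

        def probe_lan(timeout=2.0):
            """Broadcast a probe and collect replies from any listener on the segment."""
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.settimeout(timeout)
            s.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))
            found = []
            try:
                while True:
                    data, addr = s.recvfrom(1024)
                    found.append((addr[0], data.decode(errors="replace")))
            except socket.timeout:
                pass
            return found

        def answer_probes(identity=b"APPLIANCE 00:11:22:33:44:55"):
            """What the appliance side would run: answer probes with its identity."""
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", DISCOVERY_PORT))
            while True:
                data, addr = s.recvfrom(1024)
                if data == PROBE:
                    s.sendto(identity, addr)

        if __name__ == "__main__":
            for ip, info in probe_lan():
                print(ip, info)

    With a listener like answer_probes running on the appliance, the configuration tool can find it by MAC/identity and push new IP settings before the appliance has a routable address, which is essentially the plug-and-play behaviour described above.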

    Read the article

  • Is there a way to obtain the raw image from a Norton Ghost file?

    - by amo-ej1
    I have a Norton Ghost .gho file, and basically I want to extract the raw image out of it, for example to use with tools like dd (so using any Symantec/Windows tools is not an option). Is there any open source tool which can deal with these .gho files? (Note: the .gho file I'm using is a sector-based one, not a file-based one, which implies that things such as Ghost Explorer won't work either; see this page.)

    Read the article

  • Restricting Access to Application(s) on Point of Sale system

    - by BSchlinker
    I have a customer with two point of sale systems, a few workstations and a Windows 2003 SBS server. The point of sale systems typically run QuickBooks Point of Sale and are logged in with a user who has restricted permissions/access (via Group Policy). Occasionally, one of the managers needs to be able to run a few additional applications, including some accounting software. I have created an additional user for this manager, allowing them to log in and access the accounting software. The problem is that switching users on the system is slow, as QuickBooks takes a few minutes to close (under POSUser) and then reopen (under ManagerUser). If customers are waiting, this slows things down drastically. Since the accounting software is stored on a network drive, it would be easiest if the manager could simply double-click something, authenticate against the network drive / domain controller, and then the program would launch. When they close the program, the session to the network drive would be lost and the program would no longer be accessible. Is there any easy way to do this? Both users are on a domain and the system is Windows 7. I just don't want to require the user to switch back and forth. In a worst-case scenario, they forget to switch back and leave the accounting software wide open.

    Read the article

  • How to troubleshoot when one has no idea where to start?

    - by Chris Walton
    I am looking for hints, tips and answers on how to get started on troubleshooting when:

    - The problem is intermittent
    - The problem could lie literally anywhere: operating system; free/open source software; my own software developments; purchased software; crumbs on the keyboard; the specific combination of software I am currently running; Maxwell's demon; the little blue men actually running the machine have gone on strike; etc.
    - I have expertise in only a few of the areas that are potential candidates for the cause of the problem.

    The specific problem I am having is detailed below as an example, but I am not seeking answers to my current problem, rather where and how to start on tackling such problems. I am currently encountering a problem with my new machine. On a few occasions the machine has just frozen: not accepting keystrokes, mouse clicks, or anything except the power on/off switch. Invariably I have been merely browsing the web, with a few (<= 6) other applications running. None of these applications are major, and they represent a mix of commercial programs and open source programs, typically migrated from Unix of some variety. My machine is a Windows 7 i7 quad-core laptop.

    Read the article

  • How do I troubleshoot computer dumps?

    - by KronoS
    Once I have a dump from a computer crash/freeze, what are some tools and steps for troubleshooting the crash based on the dump itself? I am looking for tools to isolate which processes or issues are causing the crash, and also good techniques for working through the actual dump. Once I've determined what the "troublesome" process is, what do I do to troubleshoot the issue? For example, if I determine that process foo.exe or bar.dll etc. is the problematic file, how do I determine what can be done?

    Read the article

  • Microsoft Home Use Program - use more than one computer

    - by kristof
    I purchased a copy of MS Office through the Microsoft Home Use Program (HUP). It basically allows you to get a very cheap copy for home use if your employer owns the licence. My question is: can I install it on more than one PC/laptop at home? I could not find anything in the FAQ. Thank you. EDIT: I was installing Office 2010. I found the following in the EULA:

        MICROSOFT SOFTWARE LICENSE TERMS
        ....
        2. INSTALLATION AND USE RIGHTS.
        a. One Copy per Device. You may install one copy of the software on one device. That device is the "licensed device."
        b. Licensed Device. You may only use one copy of the software on the licensed device at a time.
        c. Portable Device. You may install another copy of the software on a portable device for use by the single primary user of the licensed device.

    Here is the full copy of the licence

    Read the article

  • How to recover a USB flash drive

    - by Steve Rowe
    I have a USB flash drive that claims it needs to be formatted every time I put it into my computer (Windows). Yesterday the drive was healthy and had data on it. The data is probably still there. Are there any free tools to restore the drive? If not free, what tools are known to work in this situation?

    Read the article

  • Differential backup missing moved folders (flawed archive attribute logic)

    - by Max
    Recently I've discovered that my backup system is flawed: there are situations where various files/folders are missed. I back up from a local disk to a network NAS. I use Cobian Backup, and I have set up the backup software to create one full backup every week and one differential backup every day. Now, the backup software (to my knowledge, any backup software works this way) decides which files go into the differential backup by looking at the file's archive attribute. If the attribute is set, the file goes into the backup. When you move a file to a new location on Windows systems, the archive attribute gets set and the file is included in the backup, and that's fine... but when you move an entire folder, no archive attribute is set, not on the folder, nor on any of the files inside it, so the moved folder isn't included in the differential backup! So, if you have a full backup plus a differential backup and you moved folders around, then it's impossible to reconstruct the original file/folder structure from the full+differential backup, because the backup software didn't include the moved folders in the differential backup. So my differential backups are useless... Why does Windows set the archive attribute when moving a file, but not when moving a folder? How can I deal with this issue? Is there a way to create a differential backup that works as it's supposed to? Doing a full backup every day is not practical, because the changed data is about 0.1% per day (by using a differential backup I can keep 4 weeks of file history without using too much disk space).
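
    One hedged workaround, assuming the differential really is driven purely by the archive attribute: after moving a folder, re-set the attribute on everything under its new location so the next differential picks it up. Windows-only sketch using ctypes; the example path is hypothetical.

        import ctypes
        import ctypes.wintypes
        import os

        FILE_ATTRIBUTE_ARCHIVE = 0x20
        INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF

        kernel32 = ctypes.windll.kernel32
        kernel32.GetFileAttributesW.restype = ctypes.wintypes.DWORD

        def flag_tree(root):
            """Set the archive attribute on every file under root so the next differential backs it up."""
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    attrs = kernel32.GetFileAttributesW(path)
                    if attrs != INVALID_FILE_ATTRIBUTES and not (attrs & FILE_ATTRIBUTE_ARCHIVE):
                        kernel32.SetFileAttributesW(path, attrs | FILE_ATTRIBUTE_ARCHIVE)

        # Hypothetical usage after moving a folder:
        # flag_tree(r"D:\Data\FolderThatWasMoved")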

    Read the article

  • Apache Alias subfolder and starting with dot

    - by MauricioOtta
    I have a multi-purpose server running Arch Linux that currently serves multiple virtual hosts from:

        /var/www/domains/EXAMPLE.COM/html
        /var/www/domains/EXAMPLE2.COM/html

    I deploy those websites (mostly using the Kohana framework) with a Jenkins job that checks out the project, removes the .git folder, ssh-copies the tar.gz to /var/www/domains/ on the server, and untars it. Since I don't want to have to re-install phpMyAdmin after each deploy, I decided to use an alias. I would like the alias to be something like /.tools/phpMyAdmin/ so I could add more "tools" later if I wanted to. I have tried just changing the default httpd-phpmyadmin.conf that was installed by following the official wiki (https://wiki.archlinux.org/index.php/Phpmyadmin):

        Alias /.tools/phpMyAdmin/ "/usr/share/webapps/phpMyAdmin"
        <Directory "/usr/share/webapps/phpMyAdmin">
            AllowOverride All
            Options FollowSymlinks
            Order allow,deny
            Allow from all
            php_admin_value open_basedir "/var/www/:/tmp/:/usr/share/webapps/:/etc/webapps:/usr/share/pear/"
        </Directory>

    Changing only that doesn't seem to work with my current setup on the server, and Apache forwards the request to the framework, which 404s (as there's no route to handle /.tools/phpMyAdmin). I have mass virtual hosting enabled and set up like this:

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:8000

        # get the server name from the Host: header
        UseCanonicalName On

        # splittable logs
        LogFormat "%{Host}i %h %l %u %t \"%r\" %s %b" vcommon
        CustomLog logs/access_log vcommon

        <Directory /var/www/domains>
            # ExecCGI is needed here because we can't force
            # CGI execution in the way that ScriptAlias does
            Options FollowSymLinks ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        RewriteEngine On
        # a ServerName derived from a Host: header may be any case at all
        RewriteMap lowercase int:tolower

        ## deal with normal documents first:
        # allow Alias /icons/ to work - repeat for other aliases
        RewriteCond %{REQUEST_URI} !^/icons/
        # allow CGIs to work
        RewriteCond %{REQUEST_URI} !^/cgi-bin/
        # do the magic
        RewriteCond %{SERVER_NAME} ^(www\.|)(.*)
        RewriteRule ^/(.*)$ /var/www/domains/${lowercase:%2}/html/$1

        ## and now deal with CGIs - we have to force a MIME type
        RewriteCond %{REQUEST_URI} ^/cgi-bin/
        RewriteRule ^/(.*)$ /var/www/domains/${lowercase:%{SERVER_NAME}}/cgi-bin/$1 [T=application/x-httpd-cgi]

    There is also nginx running on this server on port 80, acting as a reverse proxy for Apache:

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8000;
        }

    Everything else was set up by following the official wiki, so I don't think those parts cause trouble. Do I need to set up the alias for phpMyAdmin alongside the mass virtual hosting, or can it be in a separate include file for that alias to work?

    Read the article

  • Windows 7 - Intermittently processes will not close when the app closes

    - by Bill Sambrone
    I have a user I am supporting who has the strangest issue. There are 2 problem applications, Word 2010 and a scanning program called ScandallPro. Intermittently (and at least once a day), she will close an app and the underlying process will not close. Both Word 2010 and the scanning software have all the latest updates. Another user with identical hardware and the same software does not have this problem. I have formatted and rebuilt the computer for the user who is having the problems. After the rebuild, the machine was fine for a day, but the scanning software continues to intermittently keep its process running even after it is closed. This is a problem because she cannot open a new instance of it while the process is still running. There is a boatload of line-of-business software on this machine, all of which she needs. I believe the Word 2010 issue is due to a misbehaving add-in (there are 2 add-ins, neither of which seems stable), and I think my best bet is to work with the add-in vendor on it. The scanning program staying open is isolated to this 1 user. The only difference between her machine and the other user's is that she has QuickBooks, RoboForm, and Adobe Acrobat X Pro. Any ideas of what could be causing this, or other diagnostic steps to try?
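
    As a stopgap while the vendor issue is chased down, a hedged watchdog sketch that ends the leftover process so a new instance can start. It assumes the third-party psutil package, and the executable name is a guess, not the real ScandallPro binary.

        import psutil  # third-party package; assumed to be acceptable to install

        LEFTOVER = "scandallpro.exe"   # hypothetical executable name - check Task Manager for the real one

        def kill_leftover(name=LEFTOVER, grace_secs=10):
            """End any lingering copy of the process so the app can be relaunched."""
            for proc in psutil.process_iter(["name"]):
                if (proc.info["name"] or "").lower() == name:
                    proc.terminate()                 # ask politely first
                    try:
                        proc.wait(timeout=grace_secs)
                    except psutil.TimeoutExpired:
                        proc.kill()                  # then force it

        if __name__ == "__main__":
            kill_leftover()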

    Read the article

  • Steps to diagnose performance bottlenecks on Mac OS X

    - by Dave Cahill
    If you wanted to track down performance issues on a machine running Mac OS X and find out what was causing slowdowns, which command-line or graphical tools would you use, and how would you use them? I'm interested in advice on the best tools, and explanations of how to use them - when a machine slows down or freezes up, I'd like to be able to dig down and understand what's going on, memory / disk / CPU-wise. Thanks.

    Read the article

  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb : The process cannot access the file because it is being used by another process.

    My questions: Should I be doing a manual backup of DHCP via command-line tools, or maybe with the MMC (Action, Backup), before I run my backup? Is the %SystemRoot%\System32\DHCP\Backup directory always kept up to date? (That directory does get backed up by the backup software.) I'm answering my own question, but the registry key is set to 3c (hex), 60 minutes, I believe: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterva. This is not the backup software included with Windows; it is another product, but I have seen this with every backup software I've ever used.
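
    One hedged pre-backup step is to have the DHCP service write its own consistent export and back that file up instead of the live .mdb. The sketch below shells out to netsh (verify the export syntax on your server); the output path is illustrative.

        import datetime
        import subprocess

        # Illustrative output path; point it somewhere your backup job already covers.
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
        target = r"C:\Backups\dhcp-%s.txt" % stamp

        # 'netsh dhcp server export <file> all' asks the service to write a consistent export while it runs.
        subprocess.run(["netsh", "dhcp", "server", "export", target, "all"], check=True)
        print("DHCP configuration exported to", target)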

    Read the article

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong with one hard drive... and in case something goes wrong with the server, it's mirrored by an identical spare server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuke? can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

    - Cost of reputational damage if down: very high
    - Probability of a hardware failure with my setup: <<1%
    - Probability of a hardware failure with a less paranoid setup: <<1% as well
    - Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down.)

    In other words, I feel like I could host a cheap laptop in my mother's flat and the human/software problems would still be my bigger risk. Of course, there are other things to take into consideration, such as:

    - scalability
    - data security
    - the clients' expectation that you meet the industry standard

    But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?
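
    For reference, the arithmetic behind that last comparison (a plain calculation, assuming a 365-day year):

        MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

        for uptime in (0.99999, 0.999999):
            downtime = (1 - uptime) * MINUTES_PER_YEAR
            print("%.6f uptime -> %.1f minutes of downtime per year" % (uptime, downtime))

        # 99.999%  uptime -> about 5.3 minutes of downtime per year
        # 99.9999% uptime -> about 0.5 minutes (roughly 32 seconds) per year
        # By contrast, 1% downtime from software bugs is about 5,256 minutes, i.e. roughly 3.7 days per year.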

    Read the article
