Search Results

Search found 60836 results on 2434 pages for 'system io directory'.


  • Managing Internal Yum Repository Groups

    - by elmt
    What is the best method for handling yum group dependencies? For example, take this comps.xml file <comps> <group> <id>production</id> <name>Production</name> <default>true</default> <description>Packages required to run</description> <uservisible>true</uservisible> <packagelist> <packagereq type="default">ssh</packagereq> </packagelist> </group> <group> <id>development</id> <name>Development</name> <default>false</default> <description>Packages required to develop</description> <uservisible>true</uservisible> <packagelist> <packagereq type="default">gcc</packagereq> </packagelist> </group> </comps> which is packaged with createrepo -g comps.xml x86_64. The ssh and gcc RPMs are not present in the x86_64 directory. If I run yum groupinstall development, yum is smart enough to pull the gcc package from the RHEL repo even though the groups are defined in my internal repository. However, is this the proper way of doing this, or should I copy the RPMs to my local repository and recreate the repo?
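
    If the goal is to make the internal repository self-contained, a minimal sketch of the copy-and-recreate approach, assuming yum-utils is installed and the repo tree lives at /repo/x86_64 (a hypothetical path; the package names come from the comps.xml above):

        yumdownloader --resolve --destdir=/repo/x86_64 ssh gcc   # pull the group packages plus dependencies from the upstream repo
        createrepo -g comps.xml /repo/x86_64                     # rebuild the metadata with the group file attached
        yum clean all                                            # on clients, so the new repodata is picked up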

    Read the article

  • PsExec - what if the .exe you run remotely requires user interaction?

    - by Luca Matteis
    I use PsExec to deploy small .exe files to my workstations. Sometimes the .exe, for some reason, is incompatible with the system and opens a popup window with an error or some other message. Is there any way to know when the executable is 'stuck' there requiring user interaction? On my side I just have PsExec running and waiting for the .exe to return (which will never happen because the popup is stuck there). Since I run PsExec programmatically and automate all the executions, when this popup situation occurs I find myself with hundreds of PsExec processes stuck there and no result back from them. What's the best way to handle this sort of issue? Thanks.

    Read the article

  • Any experience with SATA SAS Interposer Cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has had some experience with these or similar products. The major benefits I can identify are the increased cable voltage (if all drives are SAS connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition. The question is: do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless? Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than a 7200 rpm SATA drive. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?

    Read the article

  • Moving Exchange .EDB and .STM file to other partition

    - by Jorge Fernandez
    I'm trying to move my Exchange mailbox store to a new partition, and I keep running into an error message saying: "cannot copy: insufficient system resources exist to complete the requested service." The server is a Dell PowerEdge 2850 with dual Xeon processors @ 3.00 GHz and 4 GB of RAM, running Windows Server 2003 R2 SP2 with Exchange 2003 Standard. The store is around 55 GB. Any ideas? I want to get Exchange onto its own partition, since I need to free up some space on the partition it's currently on.

    Read the article

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application, however the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Using the Windows scheduler also succeeds. Basically, the third-party software will schedule the .BAT file, and it shows success within its user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch file. The third-party software shows success because it was able to call the .BAT file, but it has no control over the other EXEs that are called within the script. I'm able to run a simple .BAT file in the third-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE, which launches Excel to create a file at a location. The .BAT file calls something.exe, which then calls Excel.exe: C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot Do you think it's a permissions issue? I used Process Monitor to check for Access Denied errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Windows Server 2008 64-bit.

    Read the article

  • How do you create virtual folders from saved search

    - by Jérôme Radix
    I would like to have, on Unix-like platforms, the same functionality as Windows 7 Library folders (aka virtual folders) that you see in Windows Explorer. GNOME Nautilus does this kind of virtual folder through saved searches, but I want a system-wide solution, not a GNOME-wide solution. Is there a tool that creates virtual folders from the concatenation of multiple search queries (the result of multiple find commands)? The solution should index files for better performance, and you should be able to define the default folder for copy operations. I assume a solution to this kind of problem would use FUSE, but I can't see a complete solution to this kind of task among FUSE applications.
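
    Short of a full FUSE filesystem, a crude stopgap sketch of the "concatenation of find queries" idea is a symlink farm rebuilt periodically; the paths and queries below are only examples, and there is no indexing:

        mkdir -p ~/virtual/reports
        find ~/work -name '*.ods' -print0 | xargs -0 -I{} ln -sf {} ~/virtual/reports/   # query 1
        find ~/docs -mtime -7     -print0 | xargs -0 -I{} ln -sf {} ~/virtual/reports/   # query 2
        # re-run from cron to keep the view fresh; name collisions simply overwrite each other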

    Read the article

  • Misaligned RAID partition in Ubuntu 10.04

    - by Linux Jedi
    I attached two identical hard drives to my Linux machine. Then, using GParted, I formatted the first 1024 MB at the beginning of each drive as Linux swap space. Then I went into System > Administration > Disk Utility. In there I went to File > Create > RAID Array, selected the remaining space on each of the two identical hard drives, and created a striped RAID array. After the array was created, a warning message appeared: "The partition is misaligned by 522240 bytes. This may result in very poor performance. Repartitioning is suggested." What do I do now? As far as I can tell, the partitions are identical.
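
    One way to check and fix this is to recreate each data partition on a 1 MiB boundary with parted before rebuilding the array; a minimal sketch, where the device and partition numbers are examples and removing the partition destroys its contents:

        parted /dev/sdb align-check optimal 2                 # is partition 2 optimally aligned?
        parted /dev/sdb rm 2
        parted -a optimal /dev/sdb mkpart primary 1MiB 100%   # start at 1 MiB so the RAID member is aligned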

    Read the article

  • Intel HD 4000 and Nvidia GT 650 working together on laptop

    - by Juan
    My new Windows 7 Acer notebook has an i5 CPU with Intel HD 4000 graphics and an Nvidia GT 650 GPU. Obviously the monitor is plugged into the Intel HD. In the Nvidia control panel I can configure PhysX, but that doesn't help. The Windows system rating shows a high gaming experience and an average/low Windows Aero experience. What does that mean? Does my laptop use the Nvidia GPU for games/3D apps and the Intel HD 4000 for Aero? Should I disable the Intel HD in the BIOS, and if so, how do I plug the monitor into the Nvidia GPU? Or should I leave everything as it is, because it already works as it is supposed to? Here is an image capture of some states: http://oi47.tinypic.com/34p0qp4.jpg

    Read the article

  • Fatal error: Out of memory (allocated ...) (tried to allocate ... bytes) not due to memory_limit setting

    - by Lorenz Meyer
    For the past few days, I have been getting the following error on my server: Fatal error: Out of memory (allocated 262144) (tried to allocate 393216 bytes) Usually this error is due to memory consumption exceeding the configured memory_limit, but in my case there is no relation: the memory_limit is set to 128 MB, and here we don't even reach 1 MB. The server also does not have a big load; in fact it is an intranet server, and there are just a few people connected to it. System: Windows Server 2003, 1 GB RAM, only 600 MB used; Apache 2.2.4; PHP 5.2.3. The error appears randomly, and the failed allocation also varies randomly between a few kB and a few MB. Sometimes restarting Apache is required to get rid of the error, sometimes it disappears by itself. Restarting Apache or the entire server helps temporarily. Where could this problem come from? How could I narrow down the error source?

    Read the article

  • Looking for Firebird GUI

    - by EAMann
    I use phpMyAdmin to manage all of my MySQL databases and SQL Server Management Studio Express to manage my MS SQL databases. Now I need to start working with Firebird, and I'm looking for a tool along the lines of Management Studio to manage those databases as well. I can be flexible with the UI and can learn a new system, so if there's something freely available that will do the trick but isn't quite the same as SQL Server Management Studio, I think I could adapt. Bottom line: what free tools are available that provide an in-depth GUI for Firebird?

    Read the article

  • Is Windows 8 Pro usable without activation?

    - by DzinX
    There seems to be little information about the way the activation system works in Windows 8, especially for the consumer versions of Windows (Core/Pro, not Enterprise). I already know that Microsoft now requires you to enter a key before even starting the installation process, and that the installer tries to connect to the Internet and activate this key as soon as possible. Sometimes, though, you don't have an Internet connection when installing Windows 8. What then? Can you install Windows? Can you use it? What are the limitations? For how long can you keep your copy of Windows usable without activation?

    Read the article

  • Why is the size of windows off by 226x238 if defined via the Window Rules?

    - by Bobby
    I have installed Sawfish 1.8.2 from source on my new Ubuntu 12.04 installation, following the Debian instructions, but I also had this problem with the stock 1.5.3. Whenever I define dimensions in the Window Rules for a window, the size is off by exactly 226x238 pixels, which means that 100x100 turns into 326x338. That's a very odd behavior, given that Sawfish is saving and loading the dimensions of windows correctly (if saved via the window menu). Some additional system information: $ uname -a Linux Dagon 3.2.0-24-generic-pae #39-Ubuntu SMP Mon May 21 18:54:21 UTC 2012 i686 i686 i386 GNU/Linux $ sawfish --version sawfish version 1.8.2 Nvidia proprietary driver, 9600GT. Two monitors, 1920x1080 + 1440x900 in one session. Positioning the windows works fine; only the dimensions are off by that odd amount. Does somebody have an idea why?

    Read the article

  • How much power supply do I need for my server, and could a shortage be causing my odd crashing?

    - by dolan
    I have 5 servers, all with similar hardware (i7, four 2 TB 7200 rpm drives, two 4 TB 5400 rpm drives, 430 watt power supply), and lately the machines have been freezing up. This has gotten worse in the last day or so, and I can't pinpoint any explanation. One recent change was adding the two 4 TB hard drives. The crashes happen most often while running a large Hadoop job, so I was originally thinking the load was causing some issues, but last night one server froze without any heavy load on the box (or so I think), other than that HDFS (Hadoop's distributed file system) was probably rebalancing itself since two of the five nodes were offline. If I plug a monitor and keyboard into one of these frozen machines, I can't get any response or feedback on the screen. Any ideas on possible points of failure and/or different logs I can look at to investigate? Thanks. Edit: The systems are running Ubuntu 10.04. Edit 2: More on hardware: Intel Core i7-930 Bloomfield 2.8 GHz processor (quad core), 12 GB (6 x 2 GB) Kingston DDR3 1333 RAM, Antec EarthWatts Green 430 power supply, MSI X58M LGA 1366 motherboard.
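
    As a starting point for the log question, a minimal sketch of what to check on Ubuntu 10.04 after a freeze (package names may vary by release):

        grep -iE 'mce|panic|oops|thermal' /var/log/kern.log /var/log/syslog   # anything logged just before the hang?
        sudo apt-get install lm-sensors mcelog
        sudo sensors-detect        # answer the prompts once, then:
        sensors                    # temperatures, fan speeds, and supply voltages under load
        sudo mcelog                # decoded machine-check (hardware) errors, if any were recorded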

    Read the article

  • IIS FTP 7.5 Data Channel Problem (SSL)

    - by user59050
    I wonder if anyone can point me in the right direction. I am setting up both an FTPS client and server, with the server using Microsoft's IIS FTP 7.5. The client side runs on Linux, and I am using M2Crypto for the OpenSSL wrapping (Python). I suspect the problem is on the server side (IIS 7.5) because of the following discovery: if I host using FileZilla with BOTH the control and data channel forced to be encrypted, it works 100% (100% file transmission); if I use IIS as the server, everything works up to the point when the data channel takes over, i.e. all data of the retrieved file is already received correctly in my basket! The FTP server just won't send the final '226 Transfer complete.' on the command socket. Why? If I force the client or server to close the connection, the file is 100% intact. If I use IIS 7.5 with forced encryption on the control channel only, everything works 100%, as long as I don't force the data channel. Here are some screenshots to demonstrate this (client view after killing the client): http://forums.iis.net/p/1172936/1960994.aspx#1960994 Summary: we can establish the connection, do directory listings, and start the upload, and we see the file (0 bytes) created on the server, but then the client hangs. If we terminate the client, the uploaded file on the server suddenly jumps up to full size.

    Read the article

  • Unicorn and nginx: 'something went wrong'

    - by achempion
    I deployed my app via Capistrano. The deploy finished, but when I start nginx and open my site in the browser I see 'We're sorry, but something went wrong.' That is bad. I use Unicorn. See my configs: https://gist.github.com/3904032 If I start the server via rails s -e prodiction, it works! I think the error may be because I can't restart the server: root@li272-194:~# /etc/init.d/nginx restart Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok configuration file /etc/nginx/nginx.conf test is successful [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) [emerg]: still could not bind() nginx. Any ideas? nginx log: 2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194" 2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"
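
    The bind() errors say something else already owns port 80, and the [crit] line says the Unicorn socket that nginx expects does not exist yet. A minimal debugging sketch; the /srv/zarcon paths come from the log above, while the current-release path and Unicorn flags are assumptions based on a standard Capistrano layout:

        netstat -tlnp | grep ':80 '              # which process already holds port 80?
        nginx -s stop                            # or kill that PID, then:
        /etc/init.d/nginx start
        ls -l /srv/zarcon/shared/unicorn.sock    # must exist before nginx can proxy to it
        cd /srv/zarcon/current && bundle exec unicorn -c config/unicorn.rb -E production -D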

    Read the article

  • Restreaming video from XSplit to multiple JustinTV/TwitchTV channels in different resolutions and bitrates

    - by lmojzis
    I have a really simple question, but the answer may be a little more complex, I guess. Okay, let's go. I have an application called XSplit Broadcaster (http://www.xsplit.com/). It supports streaming video over RTMP. Now what I want to do is this: XSplit --(720p RTMP)--> [MyTranscodingServer], which then sends a 720p stream to TwitchTV FirstChannel and a 360p stream to TwitchTV SecondChannel. Is there a simple way to do this? Additional info: both channels accept a standard RTMP stream on their RTMP endpoint using either username/password or a stream key. The server operating system is GNU/Linux.
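
    One hedged sketch of the transcoding-server piece, assuming the box already has a local RTMP ingest (for example nginx with the RTMP module) that XSplit pushes to, and using placeholder ingest URLs and stream keys:

        ffmpeg -i rtmp://localhost/live/source \
          -c copy -f flv "rtmp://live.justin.tv/app/FIRST_STREAM_KEY" \
          -vf scale=-2:360 -c:v libx264 -preset veryfast -b:v 800k -c:a copy \
          -f flv "rtmp://live.justin.tv/app/SECOND_STREAM_KEY"

    The first output passes the 720p feed through untouched; only the second output re-encodes, so a single 360p transcode runs on the server.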

    Read the article

  • Is it possible to have a scheduled task click a button in Windows?

    - by Jeremy Lattimore
    One of my clients has an application that cannot be run from the command line or via an API but needs to be run on a scheduled basis. It would require that I launch the application if it's not already running and then click a button. This task would live on a server. I had read about AutoHotkey, but it doesn't sound like it can click buttons from a scheduled task. Does anyone have ideas on how to do this? NOTE: As I answered below... I know this is a hack. It would be better to have an app that can actually be scheduled to run. Unfortunately (for me) this application isn't going to be replaced, as it is connected to the client's beloved inventory management system and the entire deal is proprietary. So I just need to find an option to attempt this (it might not be completely reliable, I know). Thanks.

    Read the article

  • postfix: force server to send mail outside of localhost

    - by LoneWolfPR
    I have a PHP file that sends mail using the mail() function. The problem is that one of the forms sends to a domain that is registered on my server while the mail for it is handled on a different server. Postfix looks locally only, and when it doesn't find the email address it rejects the message. How can I configure Postfix to send mail to all domains through the Internet and not locally? Update: OK, so it wasn't a Postfix issue at all. I simply needed to turn off mail for that domain from the command line. For anyone that needs that command, it is (at least on my system): /usr/local/psa/bin/domain --update example.com -mail_service false
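
    On a plain Postfix box without the Plesk tooling, a minimal sketch of the equivalent check and fix (the parameter value below is only an example):

        postconf mydestination virtual_alias_domains virtual_mailbox_domains   # is the domain listed as local anywhere?
        postconf -e "mydestination = localhost.localdomain, localhost"         # drop the domain from the local list (or edit the map it points to)
        postfix reload                                                         # Postfix now routes the domain via its MX records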

    Read the article

  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL 6, and I have root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502, while appuser's UID on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser on each yields: uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different UID on client. So I did the following: changed the UID of the user in /etc/passwd on client to 506, then changed ownership of appuser's $HOME on client back to appuser so that I could log in. Now, when I look at the NFS share from the client side, I see that it is owned by 502, which is the OLD UID for appuser on client. I can't change ownership of the NFS share from client, since that is a volume that physically resides on server. I need to make sure that the NFS share shows ownership by appuser from both server and client. What step have I missed since changing the appuser UID on client? NOTE: I have not rebooted client (or anything else).
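
    With plain AUTH_SYS NFS the server only sends numeric UIDs, so once both sides agree on 506 the usual suspects are cached attributes on the client and leftover files still owned by the old UID; a minimal checklist sketch, where the mount point and export path are examples:

        umount /mnt/share && mount /mnt/share   # on client: drop cached attributes for the mount
        ls -ln /mnt/share                       # -n shows raw UIDs: is 506 really what comes over the wire?
        find /export/share ! -user appuser -ls  # on server: anything under the export still owned by another UID?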

    Read the article

  • Intel SSD RAID TRIM

    - by Mike Pateras
    I just found this article, which says that Intel now supports TRIM for SSD RAID arrays. It links to this download page. I'm pretty excited about that, but I'm a little confused. There seem to be two sets of drivers, an executable and something that's bootable. I ran the executable. Is that just to apply the drivers to my system now, and are the bootable drivers there so that if I reformat, I won't have to re-run everything? Do I need to do both? And is there a way to check whether it worked? I'm running an i7 on Windows 7 (ASUS P6T Deluxe motherboard), if that's significant.

    Read the article

  • PC is randomly shutting off

    - by Bobby
    I have a PC which randomly shuts off (the hard way, as if someone pulled the plug). It's not a heat problem, because this can happen after 2 minutes of operation or after 4 hours, and I checked the temperature. It's also not an operating system problem, because it will also do this while in the BIOS. If it happens once, you can be sure it will happen another two or three times within the next hour, and then it works again for over a week or even longer. Possible causes I've thought about so far are: voltage fluctuation, a loose contact (I already replugged all cables inside the PC, no luck), or a faulty PSU. Does anyone have more ideas? P.S.: I don't have the hardware specs with me right now; I'll post them as soon as I get them.

    Read the article

  • iTunes memory usage

    - by Jordan S. Jones
    Why does iTunes use upwards of 70 MB of RAM when it is minimized to my system tray playing music? -- Update -- I understand that iTunes is a resource hog :) What I'm trying to find out is which part of iTunes is using all that RAM. Is it the music library? If I have a smaller music library, will it use less RAM? Is it loading all the album artwork into RAM for some dumb reason? Additionally, are there any recommendations on what someone could do to reduce the amount of RAM it is using?

    Read the article

  • COM+ applications deployment behaves different on different systems

    - by sharptooth
    In order to give my COM+ application enough credentials, I want its components to be instantiated under the "Local Service" account. When I create a server application with the wizard on Windows Server 2003, it offers a choice of whom to instantiate components under, and "Local Service" is one of the choices. But on Windows XP, "Local Service" is not offered at all in the wizard. When I open the "Identity" tab of the COM+ application under Windows Server 2003, there's a handful of choices, "Local Service" included, and I can select any of them. But on Windows XP the same "Identity" tab only offers "Interactive user". What does this difference depend on? Does it depend on the operating system or on something else?

    Read the article

  • Common filesystem for servers behind a Rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to complete at 100% CPU, which caps our system capacity at about 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis outputs files on disk. A load balancer would increase capacity, but then the files wouldn't be available across servers, so subsequent client requests may fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file persistence code? The current code relies on simple fopen() calls, etc. What are our options?
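
    One low-rework sketch is to share the output directory over NFS on the servers' private network, so the existing fopen() paths keep working unchanged; the hostnames and paths below are examples, the single NFS server becomes a new point of failure, and Rackspace Cloud Files would be the object-storage alternative but would require changing the persistence code:

        # on the server that owns the files
        echo '/var/app/uploads 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra
        # on every other web server behind the load balancer
        mount -t nfs storage01:/var/app/uploads /var/app/uploads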

    Read the article
