Search Results

Search found 22041 results on 882 pages for 'kill process'.

Page 704/882 | < Previous Page | 700 701 702 703 704 705 706 707 708 709 710 711  | Next Page >

  • Windows Server 2012 Can't Print

    - by Chris
    I know this may sound incredibly stupid and there is probably an easy solution but I can't seem to find it. Friends of mine recently upgraded their server for their small business from the POS old one. New hardware and a change from Windows Server 2003 to Windows Server 2012. I've got everything they need transferred over and running except for printing. They need to be able to print to printers in the vans their technicians use from the server via remote desktop. In other words, they use a laptop to remote desktop into the server and need to print invoices out from the remote server to printers attached locally via USB. On the old server they just installed the identical driver and that was it; they could print as needed. On this server, no matter what we seem to do, we can't get it to print remotely, and in the process we also discovered that the server can't even print to the network printer. It sees the printer on its network and it sees (through redirect) the printers in the vans, but when you hit print it claims it did and nothing happens. There isn't an issue with the printers themselves, as every other device we have can print to them without issues. Is there some setting that is inhibiting the server from printing? Is there something I need to install (print server?) to add the functionality? Thanks in advance for helping me out here.
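
    Not from the question itself, but a common first step when the server claims a job printed and nothing comes out is to clear and restart the spooler, and to confirm the Print Server role service is installed (a hedged sketch, run on the 2012 box):

        net stop spooler
        del /Q %systemroot%\System32\spool\PRINTERS\*.*
        net start spooler
        powershell -command "Install-WindowsFeature Print-Server"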

    Read the article

  • Replacing the HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have two Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both 10.6.8). No.1 is an Open Directory Master with 50+ user accounts; No.2 has only 2 local accounts (/local/Default) and looks at the OD Master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have three internal HDs: the boot volume with only the Server OS and minimal apps, a 'DataShare' HD (500GB), and a backup drive (500GB). After upgrading the DataShare HD in Server No.2 from a small internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on Server No.2. Users get an error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, last edit date and time). Everything booted up OK, with no indication there were any issues (paths to the sharepoint look good). Notes during troubleshooting:
    - Server1 is operating perfectly; all users can access shares and authenticate etc.
    - I've checked the SACL (Server Access Control List) settings are OK.
    - On Server2, in the Server Admin app, I can see all the shares listed OK. The paths seem valid, and I can disable/re-enable the shares with no errors.
    - On Server2, Workgroup Manager lists all the accounts from the OD Master in the LDAP dir view. All seems fine from here.
    Basically everything looks normal, but no file shares on Server2 can be accessed by regular users.
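
    Not an answer, but a minimal sketch of how one might compare the share-point and service state on both servers from a shell, assuming the stock 10.6.8 command-line tools:

        sudo sharing -l              # list configured share points, their paths and protocols
        sudo serveradmin fullstatus afp
        sudo serveradmin fullstatus smb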

    Read the article

  • Xubuntu stuck after login

    - by viraptor
    How can I debug an issue with Xubuntu 12.04 (fresh install) which just waits idle after a login for about 30 seconds? The login screen is displayed correctly. After login, I get my desktop background, but no panels or auto-starting apps. It doesn't seem to be an authentication/PAM issue, because I can log in without delay at the console while the graphical session is still stuck. There's no disk or CPU activity and no obvious respawning of any process when I look at htop. There's nothing obviously wrong in .xsession-errors. Most interesting errors:
        openConnection: connect: No such file or directory
        cannot connect to brltty at :0
        WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-wFn4VR/pkcs11: No such file or directory
        ...
        (polkit-gnome-authentication-agent-1:2131): polkit-gnome-1-WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        ** Message: applet now removed from the notification area
        ** Message: using fallback from indicator to GtkStatusIcon
        ...
        (xfce4-indicator-plugin:2176): libindicator-WARNING **: IndicatorObject class does not have an accessible description.
        ...
        (xfce4-indicator-plugin:2176): Indicator-Application-WARNING **: Unable to get application list: Operation was cancelled
    Bootchart seems to end before I log in, so it's not that helpful. Where else can I look for information?
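
    Two hedged things to try from the working text console (guesses, not a diagnosis): clear any stale saved Xfce session, and start the panel by hand against the stuck display to see what it prints:

        rm -rf ~/.cache/sessions
        DISPLAY=:0 xfce4-panel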

    Read the article

  • Clone Microsoft VM > SharePoint issues

    - by Rob
    Hi, we have an existing (production) Hyper-V VM that I want to clone to create a staging server. This server is NOT on a domain/Active Directory; it uses all local computer accounts. It runs the Windows 2008 OS, SharePoint 2007, SQL Server 2008, Reporting Services, and some custom line-of-business web applications. We renamed the server etc. as per the documentation we have, but are having trouble specifically with SharePoint. Some of the sites do not come up, and there are issues adding/updating site settings such as site owners. Does anyone know of a definitive resource for this process? Thanks. Update: We seem to have most of it working, but the Shared Service Provider and the SSP's admin site are not working. In addition, the custom web parts that reference IIS Session objects are failing (which seems to be related to the Shared Service Provider). All the Windows user accounts for various services were renamed during the server cloning, but the Windows account names in SQL Server are not automatically changed. We've tried to change them in SQL Server but it still does not seem to work. It seems like a lot of work, and I am thinking that there must be some guidance out there. Also getting this:
        NullReferenceException: Object reference not set to an instance of an object.] Microsoft.Office.Server.Administration.SqlSessionStateResolver.System.Web.IPartitionResolver.ResolvePartition(Object key) +135
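
    A hedged sketch of the SharePoint 2007 side of a rename, assuming stsadm is available on the cloned box (the old/new server names and the farm account are placeholders, not values from the question):

        stsadm -o renameserver -oldservername OLDNAME -newservername NEWNAME
        stsadm -o updatefarmcredentials -userlogin NEWNAME\spfarm -password NewPassword
        iisreset /noforce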

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: it is a Windows 2008 R2 AD environment (all DCs are 2008 R2) and a couple of locations set up as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as a mapped S: drive to a DFS shared folder. For example, in the profile tab the user has: Home Folder - connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profile (both XP and W7 desktops) and got temporary profiles. Also, it looks like the local profile was created today (from the folder properties). I checked the events on a couple of machines and there are no errors related to profiles or the logon process. I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why machines lost their local profiles. PS: Laptop users do not have their folders redirected, but lost their profiles as well.
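
    If the machines themselves look healthy, one hedged place to check on an affected desktop is the ProfileList key; a profile whose SID subkey has been renamed with a .bak suffix gets loaded as a temporary profile:

        rem look for SID subkeys ending in ".bak" and ProfileImagePath values pointing at missing folders
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s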

    Read the article

  • Google Chrome is running my system out of memory

    - by jasondavis
    I am running Windows 7 x64 with 12GB of RAM. I often have multiple windows and a ton of tabs open. I use the extension Session Buddy to restore all my windows and tabs once the memory gets too high. So my 12GB of RAM will get up to around 93% used because of Chrome; I can then close Chrome down and restore the same amount of windows and tabs and it will only use about 25% of memory, and over several hours it increases back up to the 90% zone. It seems that when I close tabs, instead of freeing that memory up, Chrome doesn't, so the usage just adds up as new tabs are opened and closed; this sounds like a huge bug in Chrome. Just as an example, I just rebooted my system with only 1 window and 4 tabs open, and the task manager shows 29 chrome.exe processes. I then killed all Chrome processes and opened a Chrome window with just 1 tab; it made 27 chrome.exe processes. Is this an issue that others have? More importantly, is there a fix? UPDATE: I just read that each plugin and extension creates a chrome.exe process; I then counted 24 extensions, so that helps explain a portion of the processes. Still not sure about memory not being freed up, though!
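
    For tracking where the processes come from, Chrome's own task manager (Shift+Esc) maps each chrome.exe to a tab, extension or plugin; from the Windows side, a hedged way to watch the count over time is:

        tasklist /FI "IMAGENAME eq chrome.exe"
        tasklist /FI "IMAGENAME eq chrome.exe" | find /C "chrome.exe"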

    Read the article

  • How do I get started with the M-Project, a mobile HTML5 JavaScript framework, on Windows?

    - by Bruce Whealton
    The website for this great tool, called the M-Project, says that I will need to add a doskey like this: doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $1 $2 $3 $4 (it is a tool for creating native mobile apps with the PhoneGap/Cordova library, and it seems to be something that would be very helpful in this process). If I enter that at a command prompt in Windows 7 or 8, it's not going to stick around or persist. Is it an environment variable? Then it says at this page: http://www.the-m-project.org/ that it will work on Windows with some additional tools installed. The next line says that Node.js is needed, so I don't know if that is the additional tools mentioned above. Also, in an old discussion I read that one could just install Cygwin. What would that do? It doesn't actually install any of the Linux distributions. I did install Ubuntu 12.04 server with VirtualBox because I thought it would be good to learn more about using Linux as I manage websites that are on a dedicated host. Anyway, the suggestion to install Cygwin did not go into any details... I guess it would allow one to create a bash profile, which would only work in a Cygwin command-line window. Is that right? Isn't there a similar file that one could use in Windows, or an environment variable that one could set, to achieve the same result? Thanks, Bruce
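
    doskey macros are not environment variables and vanish with the console window; one hedged way to make the macro persist on Windows is to register it as a cmd.exe AutoRun command (the espresso.js path is the one quoted above):

        rem $* passes all arguments, equivalent to the $1 $2 $3 $4 in the original macro
        reg add "HKCU\Software\Microsoft\Command Processor" /v AutoRun /t REG_SZ /d "doskey espresso=node C:\Path\To\Espresso\bin\espresso.js $*"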

    Read the article

  • What is the reason a Windows service stopped: is it due to LAN problems or other issues?

    - by Steve
    I have a Windows service named Trunk which stopped one day, and I just want to know the reason behind it. This is an entry in the logs:
        Nov 15 17:54:04.318 :Trunk-1516:Trunk:handle_control_event:Received CTRL_LOGOFF_EVENT, ignore it
        Nov 25 15:54:52.157 :Trunk-1516:Trunk:ERROR - Process Restart Count (5) Exceeded for:C:\Program Files\secon\11.1.4\bin\vmd
        Nov 25 15:54:52.157 :Trunk-1516:Trunk:Stopping Trunk ...
        Nov 25 15:54:52.314 :Trunk-1516:Trunk:Shutting down, signaled C:\Program F
        Nov 20 15:54:20.345 :SCBridge.RegisterBridge:Exception in method: ScUtility.ScCommandException (0xa08990002): Exception from HRESULT: 0xa08990002 Supplemental Information: None available.
        at ScServer.ScServiceProcessorRegistryManager.Attach(String serviceProcessor, ScClientInformation clientInfo, FORCE_ATTACH_SPEC forceAttachToMaster)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor, Object clientInfo)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor)
        at ServerControlInterface.SCBridge.RegisterBridge(String SPName)
        for system APOLLOSP0 attempting to attach and register with the Bridge
    The service is registered with a specific account, so I thought that the user logging off from the machine might be the reason behind it, or some LAN disconnection problem. But having taken another look at the above entry, we seem to have a constant failure being generated in vmd which causes Trunk to detect that vmd requires a restart. Most of the time it works OK and the restart count is anything up to 4. In this case the Trunk log confirms that the Restart Count is 5 and so is considered to be exceeded. Presumably, this triggers the termination of the other services, and Trunk is actually doing its job. So, could this just be a timing issue where we need to increase the tolerance level (i.e. the restart count), or do we need to address the 0xa08990002 error in vmd?

    Read the article

  • QR Codes Printing, but Not Printing Correctly - Any Ideas?

    - by SDS
    I am mail merging some QR codes via file paths stored in Excel into a label template in MS Word 2013. I have the whole process with Ctrl+F9 working properly, but I am stumped on this: on the 30-label sheet I am trying to print, I have 6 labels that repeat information, stored in duplicated rows in Excel for this print job. All of the labels have 2 images on them: one is a logo and the other is a unique QR code for that person. For the first set of 6 labels that print out, everything works perfectly. However, from the 2nd time the information is printed onward, all of the merged fields and the logo look correct, but the QR codes print strangely. Basically it's the QR code as I want it, but with a copy of itself covering the top left 25% of the QR code. Print preview doesn't show this happening; only once it's printed does it come out like this. I've been trying everything I can think of to fix this and don't know what to do. So far I've recreated the document several times, and tried duplicating the images in the source folder and giving the links in the Excel document new file paths, in case the mail merge feeding from the same .jpg was an issue (even though it's not a problem with the logo). Any help or insight is greatly appreciated because this is a test run for a larger batch run that I need to get done soon :( Thank you!
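
    For reference, a sketch of the usual nested-field setup for merged pictures (not taken from the question; the QRCodePath column name is an assumption), with both field braces inserted via Ctrl+F9 and all fields refreshed with Ctrl+A, F9 before printing:

        { INCLUDEPICTURE "{ MERGEFIELD QRCodePath }" \d }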

    Read the article

  • Automatic OS reset on a dedicated internet-browsing Windows 7 PC

    - by camelCase
    I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only PC on a home network. My original plan was to install one virtual PC for family browsing, another for remote web-based server administration, and ban browser use from the host Windows 7 OS. The idea was that I could recover to a fresh VHD image once a week to eliminate any build-up of malware inside the browser VMs. However, I am now looking for alternative solutions, since the Intel Atom CPU does not have the hardware VT support which Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host OS wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery would be browser bookmarks, but these could be exported to a remote service. Edit 1: I am thinking the OS reset could be done via some disk image recovery process. Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome OS. I have just seen a video of Google Chrome OS booting off a USB pen drive in seconds.

    Read the article

  • yum fails installing php53-devel.x86_64

    - by coding_hero
    I need to recompile php on a Fedora server because I need to use the --enable-zip flag. When trying to install the devel package, I get the following message (this is after a 'yum clean all'):
        yum install php53-devel.x86_64
        Loaded plugins: rhnplugin, security
        rhel-x86_64-server-5          | 1.4 kB   00:00
        rhel-x86_64-server-5/primary  | 4.9 MB   00:00
        rhel-x86_64-server-5          14161/14161
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package php53-devel.x86_64 0:5.3.3-13.el5_8 set to be updated
        --> Processing Dependency: php53 = 5.3.3-13.el5_8 for package: php53-devel
        --> Finished Dependency Resolution
        php53-devel-5.3.3-13.el5_8.x86_64 from rhel-x86_64-server-5 has depsolving problems
        --> Missing Dependency: php53 = 5.3.3-13.el5_8 is needed by package php53-devel-5.3.3-13.el5_8.x86_64 (rhel-x86_64-server-5)
        Error: Missing Dependency: php53 = 5.3.3-13.el5_8 is needed by package php53-devel-5.3.3-13.el5_8.x86_64 (rhel-x86_64-server-5)
        You could try using --skip-broken to work around the problem
        You could try running: package-cleanup --problems
                               package-cleanup --dupes
                               rpm -Va --nofiles --nodigest
    Output of 'yum repolist':
        # yum repolist
        Loaded plugins: rhnplugin, security
        repo id               repo name                                           status
        rhel-x86_64-server-5  Red Hat Enterprise Linux (v. 5 for 64-bit x86_64)   enabled: 14,161
        repolist: 14,161
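
    This depsolving error usually means the already-installed php53 packages are a different build than the 5.3.3-13.el5_8 that php53-devel wants; a hedged way to confirm and line them up first:

        rpm -q php53 php53-common     # which build is actually installed
        yum list installed 'php53*'
        yum update 'php53*'           # bring the stack to 5.3.3-13.el5_8 before adding -devel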

    Read the article

  • Moving a site from IIS6 to IIS7.5

    - by Sukotto
    I need to move a site off of IIS6 (Win Server 2003) and onto IIS7.5 (Win Server 2008) as soon as possible. Preferably tomorrow. The site itself is a delightful mix of classic ASP (VBScript) and one-off ASP.NET (C#) applications (each ASP.NET app is in its own virtual dir and has a self-contained web.config). In case it's relevant, this is a sort of research site made up of 40 or 50 unconnected microsites. Each microsite is typically a simple form that a user submits, which then runs a stored proc on a SQL Server db and displays a chart and/or table of the results. There is very little security to worry about. The database connection info is in a central file (in the case of the classic ASP) or in each app's individual web.config (lots of duplication there). To add a little spice to the exercise... I have no idea how to admin IIS. The company no longer employs the sysadmin or the guys who set this thing up. (They're not going to employ me much longer either, but my sense of professional pride does not permit me to just walk away from this task.) The servers are on mutually firewalled networks and I have to perform a convoluted, multi-step process to copy anything from one to the other. Would someone please point me to a crash-course tutorial for accomplishing the above? I have: a complete copy of the site's filesystem on the new box; the 3rd party charting tool installed on the new system; and a config.xml file from the "all tasks - save configuration to a file" right-click menu (there doesn't seem to be a way to import it on the new system, however). The newer IIS Manager has a completely different UI and I'm totally lost. Please help.
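
    Not a full migration guide, but a hedged sketch of recreating the structure on IIS 7.5 with appcmd (site name, paths and bindings are placeholders), plus enabling classic ASP, which is off by default:

        dism /online /enable-feature /featurename:IIS-ASP
        %windir%\system32\inetsrv\appcmd add site /name:"ResearchSite" /physicalPath:"D:\sites\research" /bindings:http/*:80:research.example.com
        %windir%\system32\inetsrv\appcmd add app /site.name:"ResearchSite" /path:/microsite1 /physicalPath:"D:\sites\research\microsite1"
        %windir%\system32\inetsrv\appcmd list app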

    Read the article

  • How can I install git on RHEL 6?

    - by JR.Xyza
    I'm trying to install Git on a RHEL 6 development server. I have experience with Ubuntu, but this is my first time working with RHEL (I'm a developer trying to fill in for a recently departed Linux sysadmin). I've set up two additional repos (EPEL and IUS) for other packages needed for a Magento install. Output of yum repolist:
        [root@box]# yum repolist
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        repo id   repo name                                          status
        epel      Extra Packages for Enterprise Linux 6 - x86_64     7,841
        ius       IUS for RHEL 6Server - x86_64                        135
    Most of what I've read indicates a simple 'yum install git' should work with EPEL enabled, but I get the dreaded:
        [root@box]# yum install git
        Loaded plugins: product-id, security, subscription-manager
        Updating certificate-based repositories.
        Setting up Install Process
        No package git available.
        Error: Nothing to do
    Same goes for git-daemon, etc. I've tracked down a number of git RPMs such as this one at repoforge, but they require a train of dependencies that seems to never end. I've also toyed with compiling it manually, but the rabbit hole to get make working seems to go even deeper. I'm convinced there's a simple oversight somewhere keeping me from being able to install from the EPEL repo, but I'm a rookie at all this. Thanks in advance for help/pointers/additional resources.
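
    Not a definitive answer, but the repolist above shows only epel and ius; git normally comes from the RHEL base channel, so a hedged first check is whether that channel is attached and enabled at all:

        yum repolist all                   # is a rhel-6-server base repo listed, and is it enabled?
        yum list available 'git*'          # which repo, if any, offers a git package
        subscription-manager repos --list  # hedged: confirm the base repo id before enabling it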

    Read the article

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application, however the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Using the Windows scheduler also succeeds. Basically, the 3rd party software will schedule the .BAT file and it shows success within the 3rd party user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch. The 3rd party software shows success since it was able to successfully call the .BAT file, however it has no control over the other EXEs being called within the script. I'm able to run a simple .BAT file in the 3rd party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE which launches Excel to create a file in a location. The .bat file calls something.exe, which then calls Excel.exe:
        C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot
    Do you think it's a permissions issue? I used Process Monitor to check for any Access Denied errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Win2008 64-bit.
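
    A frequently cited workaround (an assumption here, not something from the question) for Office automation failing only when launched from a service on 64-bit Windows is the missing systemprofile Desktop folder:

        mkdir "C:\Windows\System32\config\systemprofile\Desktop"
        mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"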

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; it seems to be high disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low):
        *** system and process activity since boot ***
        PID    RDDSK    WRDSK    WCANCL   DSK  CMD       1/18
        2176    1.7G     7.3G    854.4M    39  mysqld
         671   1248K     3.0G        0K    13  flush-8:0
         566      0K     1.1G        0K     5  jbd2/sda2-8
        2401  124.2M   529.1M    22408K     3  crond
        2032    2.2G   502.0M        0K    12  nginx
        2360  425.8M   115.3M     4188K     2  httpd
    flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the HDD (after mysqld). From what I saw on Google this could be caused by some ext4-related bug; the current kernel is:
        Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux
    I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
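
    flush-8:0 and jbd2 are the kernel's dirty-page writeback and ext4 journal threads, so they mostly reflect write load generated elsewhere (MySQL, logs). A hedged tuning sketch; the values are examples and the remount assumes / is the ext4 filesystem on sda2:

        sysctl vm.dirty_ratio vm.dirty_background_ratio   # current writeback thresholds
        sysctl -w vm.dirty_background_ratio=5             # start background writeback earlier, in smaller bursts
        sysctl -w vm.dirty_ratio=15                       # cap how much dirty data can pile up
        mount -o remount,noatime,commit=60 /              # cut atime and journal-commit churn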

    Read the article

  • Associate email account with "Personal Folders" Outlook data file?

    - by TheLQ
    In the process of migrating email servers I've run into an interesting problem: in Outlook 2007 you have the default "Personal Folders" item. This contains the email for the account that was originally set up with Outlook. My issue is that I have deleted the account associated with that and created an entirely new account. So now I have "Personal Folders" and "[email protected]". However, I can't delete "Personal Folders" nor associate "[email protected]" with that PST file. Deleting it in Outlook (Tools > Account Settings > Data Files) gave the error "The default data file cannot be removed, because it is your default delivery location. After you have selected a different default delivery location, your current file can be removed." Deleting the PST file itself (outlook.pst) made Outlook ask where its default file should be, so I selected my "[email protected]" PST file and restarted Outlook. Now "Personal Folders" is called "[email protected]", but I still have a duplicate account called this. Which is bad. Worse, my email is associated with the duplicate PST, not the default. How can I associate my email with my default PST, or delete the default PST entirely? Luckily I have backups.

    Read the article

  • How to simulate Apache [END] flag on a redirect?

    - by Javier Méndez
    For business-specific reasons I created the following rewrite rule for Apache 2.2.22 (mod_rewrite):
        RewriteRule /site/(\d+)/([^/]+)\.html /site/$2/$1 [R=301,L]
    which, given a URL like:
        http://www.mydomain.com/site/0999/document.html
    is translated to:
        http://www.mydomain.com/site/document/0999.html
    That's the expected scenario. However, there are documents whose names are only numbers. So consider the following case:
        http://www.mydomain.com/site/0055/0666.html
    gets translated to:
        http://www.mydomain.com/site/0666/0055.html
    which also matches my rewrite rule pattern, so I end up with "The web page resulted in too many redirects" errors from browsers. I have researched for a long time and haven't found "good" solutions. Things I tried: Use the [END] flag; unfortunately it is not available in my Apache version, nor does it work with redirects. Use %{ENV:REDIRECT_STATUS} in a RewriteCond clause to end the rewrite process (L); for some reason %{ENV:REDIRECT_STATUS} was empty every time I tried. Add a response header with the Header clause if my rule matches and then check for that header (see here for details); it seems that a) REDIRECT_addHeader is empty, and b) headers can't be set on the 301 response explicitly. There is another alternative: I could set a query parameter on the redirect URL which indicates it comes from a redirect, but I don't like that solution as it seems too hacky. Is there a way to do exactly what the [END] flag does but in older Apache versions, such as my 2.2.22? Thanks!
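
    For what it's worth, the query-parameter marker the question already dismisses as hacky is the usual 2.2-era substitute for [END] on external redirects; a minimal sketch (the parameter name is arbitrary):

        # skip requests that already carry the marker set by the first redirect
        RewriteCond %{QUERY_STRING} !(^|&)redirected=1(&|$)
        RewriteRule /site/(\d+)/([^/]+)\.html /site/$2/$1?redirected=1 [R=301,L]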

    Read the article

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process. This is to avoid USN rollback and other replication issues. The following are the steps that I was planning to perform:
    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCentres through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure the network interface/DNS/DHCP for both DCs in the new datacentre.
    The reason I tried to use Veeam FastSCP rather than VMware Standalone Converter is that it copies rather than converts. Someone also suggested that I use a backup and restore app like Veeam Backup and Replication. It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow that it registered only 1KB/s as opposed to the normal 1MB/s (or more). When that attempt to transfer failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that the DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via a VMware console command, the ESX hosts were able to connect to the vCentre again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.
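
    Once both DCs are powered up in the new datacentre, a hedged set of health checks before letting clients back on (standard AD tooling, not steps from the original plan):

        repadmin /replsummary   # overall replication health between the two DCs
        repadmin /showrepl      # per-partition replication status
        dcdiag /v               # general DC health checks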

    Read the article

  • How To Boot with "mem=1024m" Argument using GRUB - Ubuntu 10.04

    - by nicorellius
    I am still working on this question. This new one is a different question, so I thought it would be good to post it separately. Is this the proper protocol, or should I have just edited the other question? I'm running Ubuntu 10.04 with kernel 2.6.32-22-generic on a Toshiba Satellite laptop. When I enter the GRUB menu (I have Ubuntu 9.10 installed as well), I can choose which kernel to boot. I scroll down to the one I want and press "e", and I expect to be able to enter mem=1024m and force the kernel to use this much memory. But when I run cat /proc/meminfo or look in the process manager after booting with this argument, I still see all the RAM: ~2 GB. Am I using this boot argument incorrectly? The boot configuration (before I add anything) looks like this:
        insmod ext2
        set root=(hd0,1)
        search --no-floppy --fs-uuid --set 10270f21-1c42-494b-bd3f-813c23f6d518
        linux /boot/vmlinuz-2.6.32-22-generic root=UUID=10270f21-1c42-494b-bd3f-813c23f6d518 ro quiet splash
        initrd /boot/initrd.img-2.6.32-22-generic
    The way I did this was that I added mem=1024m after the last line, pressed Ctrl+X (Emacs save and boot the kernel), and the system booted. I tried adding mem=1024m to the end and to the beginning of this list and it appeared not to change the RAM allocation.
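
    For reference, kernel parameters such as mem= only take effect when appended to the linux line itself (the kernel command line), not added as an extra line after initrd; a sketch of the edited entry, assuming the stock configuration shown above:

        linux /boot/vmlinuz-2.6.32-22-generic root=UUID=10270f21-1c42-494b-bd3f-813c23f6d518 ro quiet splash mem=1024M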

    Read the article

  • My yum repository is able to search packages, but not able to install them in RHEL?

    - by mandy
    I set up yum from a DVD. The following is the contents of my .repo file:
        [dvd]
        name=Red Hat Enterprise Linux Installation DVD
        baseurl=file:///media/dvd
        enabled=0
    I'm able to search packages. However, during installation I'm getting the error below:
        [root@localhost dvd]# yum install libstdc++.x86_64
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN. RHN support will be disabled.
        Setting up Install Process
        Nothing to do
    My yum search output:
        [root@localhost dvd]# yum search gcc
        Loaded plugins: rhnplugin, security
        This system is not registered with RHN. RHN support will be disabled.
        ======================== Matched: gcc ========================
        compat-libgcc-296.i386 : Compatibility 2.96-RH libgcc library
        compat-libstdc++-296.i386 : Compatibility 2.96-RH standard C++ libraries
        compat-libstdc++-33.i386 : Compatibility standard C++ libraries
        compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
        cpp.x86_64 : The C Preprocessor.
        libgcc.i386 : GCC version 4.1 shared support library
        libgcc.x86_64 : GCC version 4.1 shared support library
        libgcj.i386 : Java runtime library for gcc
        libgcj.x86_64 : Java runtime library for gcc
        libstdc++.i386 : GNU Standard C++ Library
        libstdc++.x86_64 : GNU Standard C++ Library
        libtermcap.i386 : A basic system library for accessing the termcap database.
        libtermcap.x86_64 : A basic system library for accessing the termcap database.
    Please guide me on this; I want to install gcc on my RHEL.
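
    A hedged guess based on the .repo file above: with enabled=0 the dvd repo is skipped at install time unless it is switched on for that transaction, for example:

        yum --enablerepo=dvd install gcc
        # or make it permanent by setting enabled=1 in the [dvd] section of the .repo file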

    Read the article

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried the rubber gem, but it does not fulfill my requirements (I needed to deploy on an existing instance, so don't recommend rubber). So, I followed this excellent tutorial: http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2 Now I have Ruby 2.0 and Rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (MySQL) as the db and the default WEBrick as the server (using the command rails server). But I've read that WEBrick is a development server and shouldn't be used in production. What I tried: I googled and came up with some alternatives:
        Capistrano
        Nginx / Apache with Passenger
        Passenger with Capistrano
        Unicorn
        Puma
    My question: What exactly are Capistrano and Passenger? Are they middleware to ease my deployment process? I don't see any difficulty in running the rails server command. If they are just middleware, does nginx with Passenger and Capistrano even make sense? Why would I add a learning curve (learning nginx, Passenger and Capistrano configs) just to run my server? I can just use nginx to deploy my app, can't I? What combination should I use on Amazon EC2 (or maybe on some other production server)?
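
    As a rough illustration, not a recommendation from the question: Passenger is an application server that lets nginx serve the Rails app directly, while Capistrano only automates the deployment steps. A minimal Passenger-with-nginx sketch, assuming the installer's default /opt/nginx prefix:

        gem install passenger
        passenger-install-nginx-module      # compiles nginx with Passenger support built in
        # point a server block in /opt/nginx/conf/nginx.conf at the app's public/ directory, then:
        sudo /opt/nginx/sbin/nginx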

    Read the article

  • Partition is missing in /dev

    - by haimg
    I'm having a strange problem since I moved from CentOS 5 to CentOS 6. I have three disks: the first two are used as a RAID1, and the third one is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted). My problem: after a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disk are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears and everything works. My question: what subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disk/by-label)? How do I configure it to scan /dev/sdc too and create all the relevant files and links in /dev? Edit: Here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev!
        sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
        sd 1:0:0:0: [sdb] Write Protect is off
        sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
        sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sd 3:0:0:0: [sdc] Write Protect is off
        sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
        sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sdb: sdc:
        sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda:
        DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000
        DMAR:[fault reason 06] PTE Read access is not set
        sdb1 sdb2 sdb3 sdc1 sda1
        sd 1:0:0:0: [sdb] Attached SCSI disk
        sd 3:0:0:0: [sdc] Attached SCSI disk
        sda2 sda3
        sd 0:0:0:0: [sda] Attached SCSI disk
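
    The /dev/disk/by-* links are created by udev from the events the kernel emits when it scans the partition table, so a hedged workaround is to force a rescan after boot (for example from rc.local) rather than hot-plugging the disk:

        partprobe /dev/sdc                         # re-read the partition table
        udevadm trigger --subsystem-match=block    # replay block-device events for udev
        ls -l /dev/sdc* /dev/disk/by-label/        # confirm sdc1 and its links now exist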

    Read the article

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:
        rsync -e ssh -avzn usr@server1:/remote/path /local/path
    The first file movement I did using tar, but I didn't think of piping it through ssh, and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1; that's what I did. It caught some missing stuff, and now it says there's nothing more to do (DRY RUN). But I have at least two files that are missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
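
    One thing worth noting: the n in -avzn is --dry-run, so that invocation only reports differences and never copies anything. A hedged verification pass that drops it and compares file contents rather than just size and mtime:

        # -c compares checksums; --itemize-changes lists exactly what differs
        rsync -e ssh -avzc --itemize-changes usr@server1:/remote/path /local/path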

    Read the article

  • Exchange 2010 update timezone of all calendar items

    - by Andrew
    We are currently operating an Exchange 2010 server with Outlook 2010 clients on a ship. We have just changed timezones today for the first time in quite a while. Is there any way to rebase all the calendars and/or update all the calendar items to the new timezone at the same time? I have already looked at the following tools: the Microsoft Exchange Calendar Update Configuration Tool - http://www.microsoft.com/en-us/download/details.aspx?id=6266 (doesn't support Exchange 2010), and the Time Zone Data Update Tool for Microsoft Office Outlook - http://www.microsoft.com/en-us/download/details.aspx?id=17291. The Time Zone Data Update Tool for Microsoft Office Outlook does work for individual users, but it has some serious downsides, including that each user needs to run it (approx. 400 users) and that it only seems to work on the default account in Outlook 2010; a lot of our users have role accounts as well that we would need to run the tool on. The only way I can find to get this tool to run on the role accounts is to make the role account the default account in Outlook, and that in itself is quite an involved process, especially if you have 2 or 3 role accounts. So is there a way to just change all calendar items on our Exchange server to a different timezone in one go? We are a little unique in that the whole organisation can change timezones overnight, meeting rooms and all, but surely a product as advanced as Exchange 2010 allows us to do what we need.

    Read the article

  • Attempted hack on VPS, how to protect in future, what were they trying to do?

    - by Moin Zaman
    UPDATE: They're still here. Help me stop or trap them! Hi SF'ers, I've just had someone hack one of my clients' sites. They managed to change a file so that the checkout page on the site writes payment information to a text file. Fortunately or unfortunately they stuffed up: they had a typo in the code, which broke the site, so I came to know about it straight away. I have some inkling as to how they managed to do this: my website CMS has a file upload area where you can upload images and files to be used within the website. The uploads are limited to 2 folders. I found two suspicious files in these folders, and on examining the contents it looks like these files allow the hacker to view the server's filesystem, upload their own files, modify files and even change registry keys?! I've deleted some files and changed passwords, and am in the process of trying to secure the CMS and limit file uploads by extension. Anything else you guys can suggest I do to try and find out more details about how they got in, and what else I can do to prevent this in future?
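
    A hedged starting point for the forensics (the paths are placeholders; adjust them to the actual web root and upload folders):

        find /var/www -type f -mtime -7 -ls                                          # anything modified in the last 7 days
        find /var/www/uploads -type f \( -name '*.php' -o -name '*.phtml' \) -ls     # script files hiding in the upload folders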

    Read the article
