Search Results

Search found 17625 results on 705 pages for 'techno log'.

Page 458 of 705

  • DomU Installation on Ubuntu 11.10

    - by sridutt
    I am trying to add a DomU operating system on Ubuntu 11.10. I have successfully installed Xen, verified with xm info and virsh version, which returns:

      Compiled against library: libvir 0.9.2
      Using library: libvir 0.9.2
      Using API: Xen 3.0.1
      Running hypervisor: Xen 4.1

    Now when I tried to install Dom0 it said "unable to connect to 'localhost:8000':" in VMM, so I followed this bug link. I could then start adding DomU. When adding DomU, at the last stage, it gives the following error:

      Unable to complete install: 'POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")'
      Traceback (most recent call last):
        File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
          callback(asyncjob, *args, **kwargs)
        File "/usr/share/virt-manager/virtManager/create.py", line 1899, in do_install
          guest.start_install(False, meter=meter)
        File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1223, in start_install
          noboot)
        File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1291, in _create_guest
          dom = self.conn.createLinux(start_xml or final_xml, 0)
        File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1686, in createLinux
          if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
      libvirtError: POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")

    I tried following this bug link, which said the bug is solved in the package below. When I run ./configure in it, I get an error:

      checking for LIBXML... no
      checking libxml2 xml2-config >= 2.6.0 ... configure: error: Could not find libxml2 anywhere (see config.log for details).

    What is the problem?
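
    A likely fix for that final configure error, as a hedged sketch assuming an apt-based Ubuntu system: configure is looking for xml2-config, which ships with the libxml2 development package, so installing it and re-running the check usually clears this failure.

      # install the libxml2 headers that provide xml2-config (assumption: apt-based system)
      sudo apt-get install libxml2-dev pkg-config
      # then re-run the build configuration from the source directory
      ./configure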

  • Why does my HDD keep waking up from standby?

    - by Pablo
    My hard drives, connected to an Ubuntu server, produce the following log entries exactly every 5 minutes:

      Nov 1 14:10:50 localhost kernel: [ 1602.884936] ata2.00: hard resetting link
      Nov 1 14:10:51 localhost kernel: [ 1603.226804] ata2.01: hard resetting link
      Nov 1 14:10:52 localhost kernel: [ 1604.274533] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      Nov 1 14:10:52 localhost kernel: [ 1604.274548] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
      Nov 1 14:10:52 localhost kernel: [ 1604.356669] ata2.00: configured for UDMA/133
      Nov 1 14:10:52 localhost kernel: [ 1604.375247] ata2.01: configured for UDMA/133
      Nov 1 14:10:52 localhost kernel: [ 1604.375265] ata2: EH complete

    I don't think this is related to hard drive failure, because it happens for ALL hard drives connected, and ONLY when I write spindown_time = 12 in /etc/hdparm.conf. The reason I add this value is to put the disks into standby mode after 60 seconds, which does happen after that period (checked with hdparm -C). My first thought was that smartd was running and spinning up the drives, but I couldn't find it in ps -aux | grep smart. Additionally, iostat shows that nothing accessed those drives, since Blk_read and Blk_wrtn remain unchanged. I also killed all processes that might be doing something with the HDDs (e.g. Samba). So I guess the problem is solely with hdparm... I have no clue where that 5-minute value is hiding.
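
    One hedged way to confirm whether anything is actually issuing I/O to the disks every five minutes (a sketch; the device name /dev/sdb is an assumption and blktrace must be installed):

      # trace block-layer activity on the suspect drive for ~6 minutes (assumed device name)
      sudo blktrace -d /dev/sdb -w 360 -o - | blkparse -i -
      # check the drive's power state before and after the trace window
      sudo hdparm -C /dev/sdb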

  • Simple, externally hosted server status page for users

    - by Chris
    I am looking for any kind of script - it can be ASP, PHP or any other web language - that gives me the ability to log outages and the current state of the network for our organisation. This would be similar to any major telco's "Network Status" page, but I just want to tell the users out there whether the systems are up and running, and keep a history of recent outages. This would be for our remote users, so they could go to a webpage (hosted externally from our main site) and see that we are currently having problems with our network. What are other people out there using?
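
    If nothing off the shelf fits, even a cron job on the external host that probes each service and rewrites a static page can do the job; a minimal sketch (the hostnames and output path are examples, not real values):

      #!/bin/sh
      # regenerate a simple status page; run from cron every few minutes (paths/hosts are examples)
      OUT=/var/www/html/status.html
      {
        echo "<html><body><h1>Network status</h1><ul>"
        for host in mail.example.com vpn.example.com www.example.com; do
          if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
            echo "<li>$host: UP</li>"
          else
            echo "<li>$host: DOWN ($(date))</li>"
          fi
        done
        echo "</ul><p>Last checked: $(date)</p></body></html>"
      } > "$OUT"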

  • How to diagnose a problem in MySQL Cluster?

    - by maj
    Hi, I have set up a MySQL cluster following exactly this howto. Page 1 is completed, but the problem is that I can see the nodes in ndb_mgm only for a little while, and then I get:

      ndb_mgm> show;
      Cluster Configuration
      ---------------------
      [ndbd(NDB)]     2 node(s)
      id=2 (not connected, accepting connect from 192.168.0.101)
      id=3 (not connected, accepting connect from 192.168.0.102)

      [ndb_mgmd(MGM)] 1 node(s)
      id=1    @192.168.0.103  (Version: 5.1.45)

      [mysqld(API)]   2 node(s)
      id=4 (not connected, accepting connect from any host)
      id=5 (not connected, accepting connect from any host)

      ndb_mgm>

    So the questions are: how do I diagnose this problem? Are there log files I can look in?
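
    Yes, there are log files; a hedged sketch of where to look, assuming the common DataDir of /var/lib/mysql-cluster and the node IDs shown above:

      # on the management node: the cluster-wide event log
      tail -f /var/lib/mysql-cluster/ndb_1_cluster.log
      # on each data node: per-node stdout and error logs (node IDs 2 and 3 here)
      tail -n 100 /var/lib/mysql-cluster/ndb_2_out.log /var/lib/mysql-cluster/ndb_2_error.log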

  • Dual displays not working - NVidia - Ubuntu 12.04 - Second Monitor - Two Screens

    - by user75105
    Graphics Card: NVidia 460 GTX. Driver: NVIDIA accelerated graphics driver (version current) I have one DVI monitor, an old Dell LCD from 2005, and one VGA monitor, an Asus ML238H from 2010 whose HDMI port broke. The Asus is plugged into my graphics card's primary monitor slot and is the better monitor even though it is VGA but my computer defaults to the Dell. This happens when I boot as well; the loading screens, the motherboard brand image, etc. are all displayed on the Dell monitor until Windows loads. Then both monitors work. The same thing happened when I booted up Ubuntu 12.04 but I did not see the second monitor when the log-in screen popped up, nor did I when I logged in. I went to System Settings/Displays and my Asus monitor is not an option. I clicked Detect Displays and the Asus is not detected. I looked at the other questions regarding NVIDIA drivers and recalled my problems with Ubuntu a few years ago and decided to check the driver. I went to Additional Drivers to install the proprietary driver and it looks like it's installed and active but I'm still having this problem. There is another driver option, the post-release NVIDIA driver, but that does not fix the problem either. Also, under System Details/Graphics the graphics device is listed as Unknown, which might indicate that it is using an open source generic driver and not the proprietary NVidia driver. But under Additional Drivers it says that I am using the NVidia driver. Any help is appreciated.
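
    A hedged first check is whether the proprietary kernel module is actually the one in use, since "Unknown" under System Details can mean the system fell back to the open-source nouveau driver (these are standard diagnostic commands; the device details will differ per machine):

      # which kernel driver is bound to the GPU right now
      lspci -nnk | grep -A3 -i vga
      # is the proprietary module loaded, and can it see the card?
      lsmod | grep -E 'nvidia|nouveau'
      nvidia-smi
      # what the X server actually detected (look for the monitors it probed)
      grep -E 'NVIDIA|EDID|connected' /var/log/Xorg.0.log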

  • What does it mean to be agile?

    - by JD01
    We have a project that everyone says we will be doing in an agile way, but I doubt we have clearly understood what agile is. In previous projects we had planning meetings, then defined the product backlog and allocated the work to developers in 2- to 3-week sprints. Every morning we had scrum meetings (which seemed to go on for half an hour each time), and each developer got on with it after that. Hardly anyone wrote any tests until the end of the sprint, and work that was not completed was added to the next sprint. Developers hardly spoke to each other and there was no TDD involved in development. In fact, most developers had a spec at the start and just got on with it for the two or three weeks the sprint was arranged for. There was hardly any communication with the client/stakeholder. QA usually got involved a few months later, and by then we found missing requirements, which further increased the amount of work we had to do. Clearly there was no feedback loop. So my question is: where did we go wrong, and how can I prevent the team from making the same mistakes?

  • 500 internal server error with a long-running PHP process

    - by Sabirul Mostofa
    I am trying to run a long PHP process and it ends with a 500 internal server error. It executes fine for about 8 minutes. I have rebooted the machine after changing the PHP settings.

    PHP config:

      max_execution_time: 3600

    After around 10 minutes, ps ax | grep php shows:

      19007 ?        S      0:08 /usr/bin/php /home/gypsy/public_html/index.php

    I have set ignore_user_abort to true. The process gets stuck at 00:08 min and isn't executed further. The Apache error log shows the error:

      Script timed out before returning headers: index.php

    It seems max_execution_time somehow isn't working. Any suggestion would be a great help.
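
    Since max_execution_time only governs PHP itself, the web server's own timeouts are worth ruling out too; a hedged sketch (it assumes Apache, and that the site may run under mod_fcgid or another CGI-style handler rather than mod_php):

      # see which PHP handler/module is actually in play
      httpd -M 2>/dev/null | grep -Ei 'php|fcgid|fastcgi|cgi'
      # look for server-side timeouts that can kill the request regardless of PHP settings
      grep -RinE 'Timeout|FcgidIOTimeout|FcgidBusyTimeout' /etc/httpd/ /etc/apache2/ 2>/dev/null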

  • $DISPLAY-dependent GTK themes

    - by Vlad Seghete
    I have a computer at home that I log into remotely. The "monitor" for it is a TV, so I want GTK applications to use a large font and icon theme, which I managed to do by editing the ~/.gtkrc-2.0 file and some other similar stuff. What I want is a separate theme for when I'm logged in remotely. The best way to explain it is that I would like my GTK theme choice to depend on the X display that the application is started on. For example, if I start something on :0.0, that is the TV and I want large fonts, but if I start it on localhost:10.0, I want a regular-size font, because it will be rendered on my laptop screen. The elegant solution would be some sort of IF statement in the .gtkrc-2.0 file that checks the $DISPLAY variable and behaves accordingly. The problem is I can't find any documentation on control structures in .gtkrc files, or whether that is even possible.
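
    As far as I can tell, gtkrc files have no conditional syntax, but GTK 2 honours the GTK2_RC_FILES environment variable, so the choice can be made in the shell before applications start; a hedged sketch for ~/.bashrc (the alternate rc file names are assumptions):

      # pick a gtkrc based on where the X display actually renders
      case "$DISPLAY" in
        :0*)         export GTK2_RC_FILES="$HOME/.gtkrc-2.0.tv" ;;      # local display = the TV, big fonts
        localhost:*) export GTK2_RC_FILES="$HOME/.gtkrc-2.0.remote" ;;  # forwarded over SSH = laptop screen
        *)           export GTK2_RC_FILES="$HOME/.gtkrc-2.0" ;;
      esac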

  • Remote Desktop Problem on Windows Server 2008 R2

    - by lukiffer
    Revised this question to be more concise, consolidating several revisions.

    Symptoms, from a domain-member Windows 7 client:

      Domain credentials to a domain controller = success
      Domain credentials to a member server (by hostname or FQDN) = success
      Domain credentials to a member server (by IP) = fail
      Local credentials to a member server (by either) = success

    From a non-domain-member Windows 7 client:

      Domain credentials to a domain controller = success
      Domain credentials to a member server = fail
      Local credentials to a member server = success
      (Identical behavior from a Mac RDC 2.1 client)

    Server configuration details:

      Windows 2008 R2 Datacenter w/ SP1
      The domain in question is a subdomain of a Windows 2008 domain (forest root). Root has DCs in both Site A and Site B; the subdomain only has DCs in Site B.
      RDP is operating normally on all root member servers and DCs.
      No remote desktop settings are defined by GPOs.
      Network level authentication is enabled; all clients are compatible and the certificate exchange/SSL handshake completes successfully.
      Not catching any errors in the netlogon log.

  • Why do mapped drives only reappear after logging out and back in, and not after a reboot?

    - by razumny
    I work in a corporate environment, where we use mostly Windows 7 Professional computers, though some legacy applications are still being run on Windows XP. We have security in place on the network not to allow access to network resources to computers that are not members of Active Directory. When logging in, our users get their home folder and a common network drive mapped to H: and F:, respectively. Sometimes, this does not happen, and the drives are not mapped. The solution is to have the user log off, and back in to Windows. If they reboot, the drives remain unmapped. Does anyone know why this may be?

  • How do I change the output line length from the "top" linux command running in batch mode

    - by Tom
    The following command is useful for capturing the processes that are currently taking up the most CPU to a file:

      top -c -b -n 1 > top.log

    The -c flag is particularly useful because it gives you the command-line arguments of each process rather than just the process name. The problem is that each line of output is truncated to fit the current terminal window. This is fine if you have a wide terminal, but if your terminal is only 165 characters wide, you only get 165 characters of information per process, and that is often not enough to show the full process command. This is a particular problem when the command is executed without a terminal, for example via a cron job. Does anyone know how to stop top truncating data, or how to force top to display a certain number of characters per line? This is not urgent, because there is an alternative way to get the top 10 CPU-using processes:

      ps -eo pcpu,pmem,user,args | sort -r -k1 | head -n 10
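
    Two hedged workarounds, depending on which procps version is installed: newer top builds accept an explicit output width in batch mode, and older ones generally honour the COLUMNS environment variable:

      # newer procps-ng top: explicit output width (up to 512 columns)
      top -c -b -n 1 -w 512 > top.log
      # older procps top: fake a wide terminal via the environment
      COLUMNS=512 top -c -b -n 1 > top.log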

  • How do I turn off "auto-echo" in bash when I 'cd'?

    - by Avery Chan
    I don't know when this started happening, but now, every time I cd to a directory, it echoes the path right before it changes directories. This happens when I log into a server but doesn't happen on my local machine. The server is running Linux; my local machine is running Mac OS X. I searched Google and looked at the bash man page, but I couldn't find anything. My .bashrc/.bash_profile doesn't have anything related to cd (that I know of). How do I modify this "feature"?
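
    The usual suspects are a cd alias/function or a CDPATH setting on the server (bash prints the resulting directory whenever cd resolves a target through CDPATH); a hedged way to check, with the grep targets being the usual startup-file candidates rather than a definitive list:

      # is cd an alias or shell function rather than the builtin?
      type cd
      # is CDPATH set? if so, cd echoes the target directory when it matches
      echo "CDPATH=$CDPATH"
      # see where either of them gets defined
      grep -n 'CDPATH\|cd()' ~/.bashrc ~/.bash_profile ~/.profile /etc/profile /etc/bash.bashrc 2>/dev/null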

  • Debugging COM+ applications

    - by cc0
    I have a number of separate COM+ applications that I have to figure out. The COM+ applications respond to a number of scheduled tasks, and I need to know which COM components within which applications are being used when I execute each of these tasks. It is easy to figure out what (if anything) goes wrong from the event log, but as I am testing each component's compatibility with the others, I need to know which ones I have actually tested by executing the scheduled tasks. Does anyone have some useful tips here? I've been looking into the Sysinternals tools, specifically Process Monitor, but I have not found a way to make it monitor the COM+ applications yet. (I initially started this question here, but realized it's probably more suited for Server Fault.)

  • Broken screen on laptop?

    - by John
    I recently damaged the screen on my HP notebook. My main concern is that I have Roboform installed and on my taskbar. The trouble is that because the screen is totaled, I can't see anything, and I discovered, to my horror, that I haven't backed any of the IDs and passwords up to a text document. Of course, I cannot access any of my content that needs an ID and password unless I can get at the Roboform data installed on the notebook. I have now ordered a SATA/USB cable to access the hard drive, which I have removed from the notebook. Will I be able to access Roboform that way, and ALL the log-in details?

  • ADFS http login failure not re-requesting credentials

    - by Devnull
    We have ADFS working with HTTP (401) login. If a user types their password incorrectly, ADFS barfs and requires that the browser be closed, rather than asking the user to attempt to log in again. Re-prompting for user credentials is the typical behavior with other web servers (even IIS). This appears to be an artifact of setting the HTTP session, but other HTTP-login applications don't behave this way. We are having additional issues now because some users are saving that password, and it's causing account lockouts because the browsers do not realize they need to update the saved credentials. Does anyone know of a workaround? We'd rather not enable forms login if possible.

  • How to failover to local account on a cisco switch/router if radius server fails?

    - by 3d1l
    I have the following configuration on a switch that I am testing for RADIUS authentication:

      aaa new-model
      aaa authentication login default group radius local
      aaa authentication enable default group radius enable
      aaa authorization exec default group radius local
      enable secret 5 XXXXXXXXX
      !
      username admin secret 5 XXXXXXXXX
      !
      ip radius source-interface FastEthernet0/1
      radius-server host XXX.XXX.XXX.XXX auth-port 1812 acct-port 1813 key XXXXXXXXX
      radius-server retransmit 3
      !
      line con 0
      line vty 5 15

    RADIUS authentication is working just fine, but if the server is not available I cannot log into the router with the ADMIN account. What's wrong there? Thanks!

  • Reset my Windows Server 2003

    - by Tim Thoirp
    I was recently given an HP ProLiant server by a friend as a gift. It has Windows Server 2003 installed on it. However, when I boot the system and try to log in to Windows Server 2003, it requires an admin password. I can't figure out the password, and my friend doesn't know it either, as it has been years since he's used the machine. I don't care about any of the data on the machine; I just want a clean new installation of Windows Server 2003 running on it. Any advice would be helpful. And no, I don't want to pay for a password-cracking tool. Thanks

  • How can I see if apache is overloaded and dropping or not accepting connections?

    - by cat pants
    Basically I just want to see if apache is handling a current level of high traffic or if I need to tune it to handle more connections. (I have found plenty of information on the actual tuning, so no help needed there) I know it has been dropping or not accepting connections earlier today, but not seeing anything in the error logs. Is the expected behavior to throw a 503 in the error log if apache cannot accept more connections? If so, what error logging level do I need in order to see these? What is the correct terminology: dropping connections or not accepting connections? MPM is prefork, OS is Linux, apache version is 2.2.15.
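
    For prefork Apache 2.2, hitting the connection ceiling normally shows up as a MaxClients message in the error log (it appears under the default LogLevel, so no special logging level is needed), and mod_status gives a live view of busy workers; a hedged sketch, with log paths depending on the distribution:

      # has apache ever hit its worker limit?
      grep -i 'MaxClients' /var/log/httpd/error_log /var/log/apache2/error.log 2>/dev/null
      # live worker/scoreboard view (requires mod_status and a /server-status Location block)
      apachectl fullstatus
      # or, without a text browser installed:
      curl 'http://localhost/server-status?auto'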

  • Nagios and RRD on an old server

    - by Pier
    I have an old server (P4-based) on which Nagios (and all the other monitoring tools) is running. In the last few weeks we have been seeing strange behavior. In /var/spool/pnp4nagios (where temporary files are stored before being processed by the pnp4nagios daemon) we have many files like perfdata.1274949941-PID-18839, and we get an error in npcd.log:

      [05-27-2010 11:17:46] NPCD: ThreadCounter 0/15 File is perfdata.1274951306-PID-27849
      [05-27-2010 11:17:46] NPCD: File 'perfdata.1274951306-PID-27849' is an already in process PNP file. Leaving it untouched.

    Sometimes some graphs are not drawn. The server is pretty loaded (load around 5-6 normally) and I suspect that npcd times out and leaves those files behind. What could I do (apart from changing the server)?

    A few facts about the system:

      CentOS 5.5
      Nagios 3.2.1
      pnp4nagios 0.6 (from source)

    Thanks
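
    A hedged way to see whether npcd is simply falling behind rather than failing outright (the paths assume a source install of pnp4nagios 0.6 under /usr/local/pnp4nagios; adjust to your layout):

      # how big is the backlog of unprocessed perfdata files?
      ls /var/spool/pnp4nagios | wc -l
      # how often does npcd complain about files it is already working on?
      grep -c 'already in process' /usr/local/pnp4nagios/var/npcd.log
      # the thread ceiling lives in npcd.cfg (npcd_max_threads); raising it may help on a loaded box
      grep npcd_max_threads /usr/local/pnp4nagios/etc/npcd.cfg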

  • Share folder with Active Directory group permissions

    - by Hihui
    I have a Debian box as a member of our AD (which is a 2k3 domain). I want to share two folders from the Debian machine: one with full access for everyone, the second readable only by the groups "ADM" and "PROD". Part of smb.conf:

      [global]
      workgroup = MYDOMAIN
      realm = MYDOMAIN.LOCAL
      netbios name = SERV-FTP
      wins server = "IP serv 2k3"
      security = domain

      [JUKEBOX] // full access
      path = /media/JUKEBOX/JUKEBOX
      comment = sharing
      writable = yes
      browsable = yes
      public = yes
      read only = no
      valid users = @ASYLUM\prod_std
      admin users = @ASYLUM\ADM

      [SOFTWARE]
      comment = Software
      path = /media/JUKEBOX/SOFTWARE
      valid users = @ASYLUM\prod_adv, @ASYLUM\ADM
      writable = yes
      read only = no

    My log:

      [2013/10/25 09:24:37.316643, 0] smbd/service.c:1055(make_connection_snum)
        canonicalize_connect_path failed for service SOFTWARE, path /media/JUKEBOX/SOFTWARE

    And from my Windows client, when I try to access that folder:

      Windows can't access \\serv-ftp\software

    Where is the problem? Thanks!
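
    That canonicalize_connect_path error usually just means the path in smb.conf cannot be resolved on disk (missing mount, typo, or case mismatch, since Linux paths are case-sensitive); a hedged check:

      # does the path exist exactly as written in smb.conf, and is the volume mounted?
      ls -ld /media/JUKEBOX/SOFTWARE
      mount | grep -i jukebox
      # validate the share definitions as smbd sees them
      testparm -s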

  • I cannot access my Flickr account

    - by AtanuCSE
    I was using my Google account to log in to Flickr. After several days, I went to my Flickr account and found out that Flickr is moving to Yahoo-only login. So I tried the Google login and it shows "This account is not connected with any Yahoo account. Sign up for new........ or use existing" etc. (can't remember the exact words). So I provided my Yahoo mail credentials. Now every time, it gives me a brand new account rather than taking me to my previous Flickr account. I can view the previous account's photos, but after going there, it treats me as an outsider. The new account shows that I've not uploaded any photos. What's wrong? How can I connect with my previous account?

  • Barnyard Service - MySQL Error

    - by SLYN
    I installed barnyard2 and saved it as a service. When I run service barnyard2 start, barnyard2 fails. When I run tail -100 /var/log/messages, I encounter a fault like this:

      ERROR database: 'mysql' support is not compiled into this build of snort
      Aug 22 11:52:06 barnyard2[25771]: FATAL ERROR: If this build of barnyard2 was obtained as a binary distribution (e.g., rpm,
      or Windows), then check for alternate builds that contains the necessary
      'mysql' support.

      If this build of barnyard2 was compiled by you, then re-run the
      the ./configure script using the '--with-mysql' switch.
      For non-standard installations of a database, the '--with-mysql=DIR'
      syntax may need to be used to specify the base directory of the DB install.

      See the database documentation for cursory details (doc/README.database).
      and the URL to the most recent database plugin documentation.
      Aug 22 11:52:06 barnyard2[25771]: Barnyard2 exiting

    What should I do to solve this problem? When I installed barnyard2, I used these commands:

      # ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
      # make ; make install

    (My system is CentOS 6.5 x86_64.)
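
    A hedged way to confirm whether the binary launched by the init script is really the build with MySQL support (it is common for a packaged binary in /usr/bin to shadow a /usr/local/bin build made from source):

      # which barnyard2 binaries exist, and is the MySQL client library linked into the one in use?
      which -a barnyard2
      ldd "$(which barnyard2)" | grep -i mysql
      # if nothing links against libmysqlclient, rebuild and watch configure's output for the mysql check
      ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql && make && sudo make install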

  • Bug: unable to handle kernel NULL pointer dereference at

    - by maria
    I have recently installed a new system on my disc, Ubuntu 12.04. The installation proceeded without problems; I started installing additional software and copying data from other discs. I have already had this bug report twice. It was quite long, I have no idea how to access the log file (which is probably saved somewhere), and since I had to switch the computer off using the power button, nothing else was possible. Here is just a small part of it (what I noted down on paper):

      could not write bytes: Broken pipe
      speach dicpatcher disabled: edit etc/default/speach-dispatcher
      saned disabled: edit ....

    and then:

      BUG: unable to handle kernel NULL pointer dereference at 0000009c

    I've run the memory test in GRUB; everything is fine. The first time it occurred I was using rsync, the second time I was trying to install texlive. Should I install the whole system once again? Or could it be a hardware problem? Or something else? If any hardware details may be relevant, please ask, since I have no idea what is happening and I don't know what kind of information could be useful. Thanks

    P.S. dmesg output:
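
    The full oops text is normally saved by the kernel logger, so it can usually be read back after a reboot without copying it by hand; a hedged sketch for Ubuntu 12.04 (which uses rsyslog rather than journald):

      # kernel messages from the current and previous boots
      grep -i 'BUG: unable to handle' /var/log/kern.log /var/log/kern.log.1 /var/log/syslog 2>/dev/null
      # messages since the current boot only
      dmesg | less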

  • IIS FTP server not working after purchase of SSL certificate

    - by Chris
    I've been connecting to my web server with active mode in FileZilla with no problems. Over the weekend, an SSL certificate was dropped into a folder that I access with FTP, and which contains files for the website. Now I am receiving a 425 error in active mode on the FTP root, so I can't really do anything but log in. In passive mode, I can connect and move around in the directory tree, but the connection seems shaky. Occasionally I'll time out, and I can't get access at all to the folder containing the SSL certificate. My question is how does the SSL certificate affect my FTP connection (if at all)? Does its presence demand the use of FTP over SSL? Note: As far as I know, the only change which occurred was the placement of the SSL certificate. Firewall settings, FTP client and server settings should all be the same as before, when everything was working.

  • VPN pre-shared key problems

    - by Owl
    I have two VPNs set up on a Symantec Gateway Security 320. VPN 1 goes to a Symantec Firewall/VPN 100 at another clinic of ours; every hour they lose connectivity, and the error log on the Firewall/VPN 100 shows an invalid pre-shared key error, although both devices show the same pre-shared key entered. VPN 2 goes to our software vendor, to use an additional part of our program. I am unable to ping the remote address, and neither can the other company, but my VPN status shows it is connected. They have told me the pre-shared key seems to be automatically resubmitting itself as if it were incorrect, about every hour, even though it is correct. They also told me port 80 traffic was closed, but I show the HTTP service using 80 redirected to 80 in my firewall settings. Please help.
