Search Results

Search found 21131 results on 846 pages for 'binary log'.


  • Can I recover a nano process from a previous terminal?

    - by davidparks21
    My system crashed while I was in a nano session with unsaved changes. When I log back in via SSH I see the nano process still running when I do a ps:

        davidparks21@devdb1:/opt/frugg_batch$ ps -ef | grep nano
        1001     31714 29481  0 18:32 pts/0    00:00:00 nano frugg_batch_processing
        1001     31905 31759  0 19:16 pts/1    00:00:00 grep --color=auto nano
        davidparks21@devdb1:/opt/frugg_batch$

    Is there a way I can bring the nano process back under my control in the new terminal? Or any way to force it to save remotely (from my new terminal)?
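    A minimal recovery sketch, assuming the orphaned nano has PID 31714 as in the ps output above and that the reptyr utility is installable on this system:

        # Option 1: reattach the orphaned process to the current terminal
        # (reptyr may also need ptrace permission, e.g. a permissive
        # /proc/sys/kernel/yama/ptrace_scope)
        sudo apt-get install reptyr
        reptyr 31714

        # Option 2: GNU nano performs an emergency save when it receives
        # SIGHUP, writing the modified buffer to frugg_batch_processing.save
        kill -HUP 31714

    Option 2 loses the session but keeps the unsaved changes, which is usually the part that matters.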


  • Reverse lookup SERVFAIL

    - by Quan Tran
    I just set up a DNS server and a web server using VirtualBox. The IP address of the DNS server is 192.168.56.101 and the web server 192.168.56.102. Here are my configuration files for the DNS server:

    named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            //query-source address * port 53;
            //forward first;
            forwarders { 8.8.8.8; 8.8.4.4; };
            listen-on port 53 { 127.0.0.1; 192.168.56.0/24; };
            allow-query { localhost; 192.168.56.0/24; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity debug 10;
                print-category yes;
                print-time yes;
                print-severity yes;
            };
        };
        zone "quantran.com" in {
            type master;
            file "named.quantran.com";
        };
        zone "56.168.192.in-addr.arpa" in {
            type master;
            file "named.192.168.56";
            allow-update { none; };
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

    named.quantran.com:

        $TTL 86400
        quantran.com.   IN SOA  dns1.quantran.com. root.quantran.com. (
                        100     ; serial
                        3600    ; refresh
                        600     ; retry
                        604800  ; expire
                        86400 )
                        IN NS   dns1.quantran.com.
        dns1.quantran.com.  IN A    192.168.56.101
        www.quantran.com.   IN A    192.168.56.102

    named.192.168.56:

        $TTL 86400
        $ORIGIN 56.168.192.in-addr.arpa.
        @       IN SOA  dns1.quantran.com. root.quantran.com. (
                        100     ; serial
                        3600    ; refresh
                        600     ; retry
                        604800  ; expire
                        86400 ) ; minimum
                IN NS   dns1.quantran.com.
        101.56.168.192.in-addr.arpa.    IN PTR  dns1.quantran.com.
        102                             IN PTR  www.quantran.com.

    When I try a normal lookup from the host (I configured the host so that the only nameserver it uses is the DNS server 192.168.56.101), it works:

        quan@quantran:~$ host www.quantran.com
        www.quantran.com has address 192.168.56.102
        quan@quantran:~$ host dns1.quantran.com
        dns1.quantran.com has address 192.168.56.101

    But when I try a reverse lookup:

        quan@quantran:~$ host -v 192.168.56.101 192.168.56.101
        Trying "101.56.168.192.in-addr.arpa"
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:

        Host 101.56.168.192.in-addr.arpa not found: 2(SERVFAIL)
        Received 45 bytes from 192.168.56.101#53 in 0 ms

        quan@quantran:~$ host -v 192.168.56.102 192.168.56.101
        Trying "102.56.168.192.in-addr.arpa"
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:

        Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL)
        Received 45 bytes from 192.168.56.101#53 in 0 ms

    So why can't I perform a reverse lookup? Is anything wrong with the zone configuration files? Thanks in advance :)

    Oh, here is the output from the log file /var/named/data/named.run when I perform the reverse lookup:

        quan@quantran:~$ host 192.168.56.102 192.168.56.101
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:

        Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL)

    /var/named/data/named.run:

        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: UDP request
        02-Jun-2014 15:18:11.950 client: debug 5: client 192.168.56.1#51786: using view '_default'
        02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: request is not signed
        02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: recursion available
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: query
        02-Jun-2014 15:18:11.950 client: debug 10: client 192.168.56.1#51786: ns_client_attach: ref = 1
        02-Jun-2014 15:18:11.950 query-errors: debug 1: client 192.168.56.1#51786: query failed (SERVFAIL) for 102.56.168.192.in-addr.arpa/IN/PTR at query.c:5428
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: error
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: send
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: sendto
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: senddone
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: next
        02-Jun-2014 15:18:11.951 client: debug 10: client 192.168.56.1#51786: ns_client_detach: ref = 0
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: endrequest
        02-Jun-2014 15:18:11.951 client: debug 3: client @0xb537e008: udprecv

    Also, I made some changes to the logging section in named.conf.
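    One way to narrow a SERVFAIL like this down is to check whether the reverse zone actually loads, since a zone that fails to load produces exactly this symptom. A diagnostic sketch using BIND's own tools and the paths from named.conf above:

        # validate the reverse zone file against its zone name
        named-checkzone 56.168.192.in-addr.arpa /var/named/named.192.168.56

        # validate named.conf itself
        named-checkconf /etc/named.conf

        # reload and watch for "zone ... loaded serial 100" or an error
        rndc reload
        tail -n 20 /var/named/data/named.run

    If named-checkzone reports an error (a missing trailing dot or a bad owner name are common), fix it, bump the serial, and reload.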


  • Apache and AJP performance

    - by user12145
    I have an Apache server sitting in front of two Tomcat app servers (one on the same physical server, the other on a different one) that do time-consuming work (0.5 to 10 seconds per request). The Apache HTTP server is getting killed by an average of only 1 to 2 concurrent requests per second. Both servers have about 2GB of RAM. Is there a way to optimize Apache to handle the load? Any advice is welcome.

        BalancerMember ajp://localhost:8009/whoisserver
        BalancerMember ajp://XXX.XX.XXX.XX:8009/whoisserver

    I keep getting the following in the Apache 2.2 log:

        [Mon Dec 28 00:31:02 2009] [error] ajp_read_header: ajp_ilink_receive failed
        [Mon Dec 28 00:31:02 2009] [error] (120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:8009 (localhost)
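    ajp_ilink_receive failures typically mean the AJP connection was dropped or timed out while Tomcat was still working on a slow request. A hedged tuning sketch; the balancer name and parameter values are illustrative, not known-good for this setup, and parameter availability depends on the exact 2.2.x release:

        <Proxy balancer://whoiscluster>
            # timeout must comfortably exceed the slowest legitimate
            # request (10s here); ping probes the backend before forwarding
            BalancerMember ajp://localhost:8009/whoisserver timeout=30 ttl=60 retry=30 ping=5
            BalancerMember ajp://XXX.XX.XXX.XX:8009/whoisserver timeout=30 ttl=60 retry=30 ping=5
        </Proxy>

    It is also worth checking that Tomcat's AJP connector in server.xml has enough threads and a connectionTimeout no shorter than Apache's timeout.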


  • How to redirect (or Alias) jump page with Apache

    - by Meltemi
    I'm not an Apache expert but need to make a small change to a web server. We are introducing a "jump page" URL that is different from a primary URL (for tracking reasons):

        /productA/index.html
        /productA/jump_index.html

    Basically I want to log that jump_index.html was requested and then return index.html. I don't want the client to wait 8 seconds or so for a redirect. How should we be handling this? Simply symlink (or alias) the file in the filesystem? Use AliasMatch (if so, how exactly)? Something better still?
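    One hedged approach is an internal rewrite, which logs the request for jump_index.html but serves index.html within the same request, so there is no client-visible redirect and no waiting:

        RewriteEngine On
        # no [R] flag, so no HTTP redirect is sent; [PT] hands the
        # rewritten path back to the normal URL-to-file mapping
        RewriteRule ^/productA/jump_index\.html$ /productA/index.html [PT]

    An Alias pointing the jump URL at the index.html file on disk behaves the same way; either variant leaves the original jump URL in the access log, which is what the tracking needs.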


  • Unable to restart MySQL server on CentOS 6.5 x86_64 KVM server (WHM/cPanel)

    - by Kevin S
    I am not able to restart the MySQL server on a CentOS 6.5 x86_64 KVM server (WHM/cPanel). I get the following error while trying to restart it:

        Waiting for mysql to restart...............finished.
        mysqld_safe (/bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/server.domain.net.pid) running as root with PID 4227 (process table check method)
        mysqld (/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.domain.net.err --open-files-limit=4096 --pid-file=/var/lib/mysql/server.domain.net.pid) running as mysql with PID 4349 (pidfile check method)
        mysql has failed, please contact the sysadmin (result was "mysql is not running").

    I even restarted the server and tried again, but the issue is the same.
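    The restart script only reports the outcome; the actual failure reason lands in the error log named on the mysqld command line above. A first diagnostic pass, using the paths from that output:

        # the --log-error file from the mysqld invocation above
        tail -n 50 /var/lib/mysql/server.domain.net.err

        # is mysqld actually running despite the pidfile check failing?
        ps aux | grep [m]ysqld
        cat /var/lib/mysql/server.domain.net.pid

    A stale or mismatched pid-file is a common cause of "pidfile check method" failures; the error log will usually say so explicitly.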


  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to that IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, the default credentials of root/calvin were denied, so I think the previous owners set their own password. After doing some reading, it appears that I can reset the credentials to the default using:

        racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    But upon entering the command, I get this error:

        bash: /usr/sbin/racadm: No such file or directory

    This holds true even if I run sudo su prior to running the racadm command. If, however, I run:

        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    there are no errors. Yet when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version from the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing:

        sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"'

    (obviously replacing /dev/ttyS1 with the correct information for my system).

        ls -l /usr/sbin/ | grep racadm

    yields:

        -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm

    I have tried these credentials after each attempt at changing the password:

        root/calvin
        root/newpassword
        admin/calvin
        admin/newpassword

    All have been unsuccessful. What should my next course of action be?
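    On a 64-bit system, bash reports "No such file or directory" for a binary that plainly exists when the binary is 32-bit and the 32-bit dynamic loader is missing. A hedged check, assuming Dell shipped racadm as a 32-bit ELF:

        # confirm the binary's architecture
        file /usr/sbin/racadm

        # if it reports "ELF 32-bit", install the 32-bit runtime
        # (package name for Ubuntu 12.04; later releases use multiarch)
        sudo apt-get install ia32-libs

    That would also explain the sudo discrepancy: sudo's secure_path may resolve a different racadm (for example a wrapper script) than your shell does, so comparing "sudo which racadm" against /usr/sbin/racadm is worthwhile.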


  • Prevent IIS7 HTTPS from binding to all SSL IP addresses

    - by robpaveza
    I've had this interesting problem with IIS7. I have a number of HTTPS sites in IIS7. That hasn't been a problem, until I wanted to go and set up VisualSVN Server using an SSL certificate. The installer had trouble starting the service. When I looked in the event log, the error was that "the file is already in use by another process." I figured that the "file" was really a socket, and checked with netstat - even though IIS was only bound to three specific IP addresses (.160, .156, and .168) with port 443, it was consuming *:443. I could stop the World Wide Web Publishing Service, start VisualSVN, and then start IIS, but then none of my SSL servers would start. Any helpful hints about how I could make IIS not try to default-bind to *:443? Thanks!!
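    HTTPS in IIS7 goes through the kernel listener HTTP.SYS, which defaults to listening on all addresses; the per-site bindings narrow what IIS serves, not what the kernel claims. A standard approach is to restrict the HTTP.SYS listen list with netsh (the addresses below are placeholders for the three real ones ending in .160, .156 and .168):

        netsh http show iplisten

        rem once any address is added, HTTP.SYS listens ONLY on the listed ones
        netsh http add iplisten ipaddress=x.x.x.160
        netsh http add iplisten ipaddress=x.x.x.156
        netsh http add iplisten ipaddress=x.x.x.168

        rem restart the HTTP stack so the list takes effect
        net stop http /y
        net start w3svc

    After that, port 443 on the remaining addresses is free for VisualSVN Server.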


  • Can't communicate with Primary DNS Server

    - by horsley
    A Windows 7 computer suddenly can't access any website by domain name. Whether the computer uses a wired link or connects to the WLAN, the fault persists:

    - IP and DNS are obtained automatically and seem normal (ipconfig /all returns the correct info)
    - I can visit websites by using an HTTP proxy
    - The DNS server is available; another computer in my room works properly
    - I can ping myself, the gateway and any other IP, but not domains
    - I can use nslookup and obtain the correct IP info
    - There are errors in the event log from DNS Client Events saying the client cannot verify that the DNS server is available
    - Windows network diagnosis says Windows can't communicate with the device or resource (Primary DNS Server)

    I suspect the DNS client is to blame. I tried the following, but the fault persists:

    - Reinstalled the driver of the network adapter
    - Reset TCP/IP (netsh int ip reset)
    - Reset Winsock (netsh winsock reset)
    - Reset LSP

    I don't want to reinstall the whole OS. What should I do?
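    Since nslookup works (it queries the server directly) while normal resolution fails (it goes through the DNS Client service and its cache), the service-side cache is a reasonable suspect; see the checks below, run from an elevated prompt:

        ipconfig /flushdns
        net stop dnscache
        net start dnscache

        rem inspect how the DNS Client service is configured to start
        sc qc dnscache

    If the fault survives a cache flush and service restart, comparing "ipconfig /all" entry by entry against the working computer is the next cheap step.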


  • How to stop Bash appending history

    - by Craig
    I am having a lot of trouble setting up the terminal history of Bash the way I want. I would like to have no duplicate entries, and if I enter a command I want it saved with the duplicates above removed. The problem is that the history command shows me it is functioning the way I want, but once I log out the duplicates come back again. I believe it is just appending the history to the existing one. I have these lines in my .bashrc file (~/.bashrc):

        HISTCONTROL=ignoreboth:erasedups
        shopt -u histappend

    I have even tried uncommenting shopt, but it still appends the history on logout. How can I have the saved history be exactly how it is before I log out?
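    erasedups only dedupes the in-memory list; what survives logout is whatever gets written to $HISTFILE. A minimal sketch that rewrites the file from the deduped in-memory history after every command, assuming a single interactive session (concurrent shells will overwrite each other's file):

        # in ~/.bashrc
        HISTCONTROL=ignoreboth:erasedups
        shopt -u histappend            # overwrite the file, don't append
        # history -n pulls in any new lines, history -w rewrites
        # $HISTFILE from the current (deduped) in-memory list
        PROMPT_COMMAND='history -n; history -w'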


  • Unmount Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right-click and select "Safely Remove Device"; after doing that, the /dev/sg7 and /dev/sdg devices are removed. Some things I've tried already are sdparm and sg3_utils, but I am unfamiliar with them, so it's possible I just didn't find the right command.
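    Nautilus's "Safely Remove Device" goes through udisks, which also has a command-line client; a sketch that mirrors it, assuming the camera is /dev/sdg as in the dmesg output:

        # unmount anything auto-mounted, then detach (power off) the device
        udisks --unmount /dev/sdg1
        udisks --detach /dev/sdg

        # lower-level alternative: ask the kernel to drop the SCSI device
        echo 1 | sudo tee /sys/block/sdg/device/delete

    If the lock gphoto2 complains about is held by the storage layer, either route should release it.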


  • Postfix SMTP error 450 (failed to add recipient)

    - by culter
    I have a Debian server with Postfix and Roundcube. After an attack we are on two blacklists, but I don't think that is the main problem: I can't send mail to any address. While trying to find the cause, I checked /var/spool/postfix/etc/resolv.conf and /etc/resolv.conf; they are identical, with this content:

        nameserver 127.0.0.1
        nameserver localhost

    In /var/log/mail.err I found:

        cyrus/imap[25452]: DBERROR: opening /var/lib/cyrus/user/m/[email protected]: cyrusdb error
        cyrus/imap[25452]: DBERROR: skiplist recovery /var/lib/cyrus/user/m/[email protected]: ADD at 1FC0 exists

    When I try to send email from Roundcube, I get the message from the title. When I send it from Opera or any other mail client, it gives no error, but the email isn't sent. Thank you for any advice.
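    The DBERROR lines point at a corrupted Cyrus skiplist database rather than at Postfix itself. A heavily hedged recovery sketch; tool paths and the cyrus user name vary by distribution and Cyrus version, and the mailbox name is redacted in the log, so substitute the real one:

        # what is stuck in the Postfix queue?
        postqueue -p

        # run Cyrus database recovery, then rebuild the affected mailbox
        sudo -u cyrus /usr/sbin/ctl_cyrusdb -r
        sudo -u cyrus /usr/sbin/reconstruct -r user/m/USERNAME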


  • CentOS PAM+LDAP login and host attribute

    - by pianisteg
    My system is CentOS 6.3, OpenLDAP is configured well, and PAM authorization works fine. But after turning pam_check_host_attr to yes, all LDAP auths fail with the message "Access denied for this host". hostname on the server returns the correct value, and the same value is listed in the user's profile. "pam_check_host_attr no" works fine and allows everyone with a correct uid/password.

    A piece of /var/log/secure:

        Sep 26 05:33:01 ldap sshd[1588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=my-host user=my-username
        Sep 26 05:33:01 ldap sshd[1588]: Failed password for my-username from 77.AA.BB.CC port 58528 ssh2
        Sep 26 05:33:01 ldap sshd[1589]: fatal: Access denied for user my-username by PAM account configuration

    Two other servers (CentOS 5.7 and Debian) authenticate against this LDAP server correctly, even with pam_check_host_attr yes! I didn't edit /etc/security/access.conf; it is empty, only the default comments. I don't know what to do. How do I fix this?
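    With pam_check_host_attr yes, pam_ldap compares the values of the user's host attribute against the name the library computes for the local host (usually the FQDN, not the short hostname). A hedged pair of checks; the filter and attribute values below are illustrative:

        # what FQDN will pam_ldap compare against?
        hostname --fqdn

        # does the user entry carry a matching host value?
        # (the entry needs e.g. "host: ldap.example.com", or "host: *")
        ldapsearch -x "(uid=my-username)" host

    A mismatch between the name stored in LDAP and the name this particular box computes would explain why other servers pass while this one fails.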


  • Can't send emails through sendmail, error occurred

    - by skomak
    Hi, I have a sendmail MTA and I use the PEAR Mail class to send mails through a remote sendmail server. Everything was fine until yesterday, and probably no changes were made to the configs. In maillog I can see:

        May  6 12:58:55 xxx sendmail[25903]: STARTTLS=server, relay=hostxxxx.static.xx.xx.pl [85.x.x.x], version=TLSv1/SSLv3, verify=NO, cipher=DHE-RSA-AES256-SHA, bits=256/256
        May  6 12:58:56 xxx sendmail[25903]: o46AwtqE025903: hostxxxx.static.xx.xx.pl [85.x.x.x] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA2

    and in /var/log/messages:

        May  6 13:00:17 lilia sendmail[27193]: realm changed: authentication aborted

    I use LDAP to authenticate users. I used the same script to check mailing on another server and it works fine there; only this server behaves weirdly. Packets are delivered to the sendmail server (I can see them in tcpdump), but they are smaller than on the other server, which sends emails fine. Could you tell me how I can check what is wrong? D.S.
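    "did not issue MAIL/EXPN/VRFY/ETRN" means the client connected, negotiated TLS, and then gave up before sending mail, which fits the "realm changed: authentication aborted" SASL error. Replaying the session by hand can show exactly where it stops; a sketch with stock OpenSSL and a placeholder hostname:

        # open an SMTP session with STARTTLS, then try AUTH manually
        openssl s_client -starttls smtp -connect mail.example.com:25
        # once connected, type:
        #   EHLO client.example.com
        #   AUTH LOGIN
        # and note which step the server rejects

    Comparing the EHLO response (especially the advertised AUTH mechanisms) between this server and the working one would also account for the smaller packets seen in tcpdump.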


  • Ubuntu security with services running from /opt

    - by thejartender
    It took me a while to understand what's going on here (I think), but can someone explain whether there are security risks in my reasoning below? I am trying to set up a home web server as a developer with some good Linux knowledge.

    Ubuntu is not like other systems in that it has a restricted root account: you cannot log in as root or su to root. This was a problem for me, as I have had to install numerous applications and services to /opt as per user documentation (XAMPP for Linux is a good example). The problem here is that this directory is owned by root:root. I notice that my admin user account does not belong to the root group, per the following command:

        groups username

    So my understanding is that even though the files and services that I place in /opt belong to root, executing them by means of sudo (as required) does not mean they are run as root? I imagine that the sudo command is owned by the root user and has something like 775 permissions? So the question I have is whether running a service like Tomcat or Apache from /opt exposes my system the way it would on other systems. Obviously I need to secure these in their configurations, but isn't the golden rule to never run anything as root? And what happens if I have multiple services running under the same user/group on a compromised server?
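    For the record, sudo does run the command as root, so a service started with sudo runs as root unless it drops privileges itself. The usual hedge is a dedicated unprivileged account per service; a sketch, with the user and path names purely illustrative:

        # create a system account with no login shell
        sudo useradd --system --shell /usr/sbin/nologin --home /opt/tomcat tomcat

        # hand the service tree to that account and start the service as it
        sudo chown -R tomcat:tomcat /opt/tomcat
        sudo -u tomcat /opt/tomcat/bin/startup.sh

    Services that share one account can read and write each other's files if compromised, which is the argument for one account per service.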


  • Database checksum features - redundant? useful?

    - by Eloff
    Just about every mainstream DB has a feature to calculate checksums per page, per sector, or per record. Now, for a DB that does full recovery after any crash, like PostgreSQL, is a checksum even useful? There will be no data loss as long as the xlog is OK, no matter what kind of corruption happened to the data itself; as the redo log is replayed, every committed transaction will be restored. So checksums are useless on restore. Doesn't the filesystem or disk keep checksums anyway to detect corruption? So unless the checksum is per record, all it does is tell you there is corruption, which the OS should be yelling at you about the minute you try to read it, so it's useless in operation too? I can't imagine how a checksum can be helpful in any sane database, but since they all use them, I'd say that's just a failure of imagination on my part. So how is it useful?


  • Single sign-on for SharePoint to MySite?

    - by Chris W
    I've got a fairly simple SharePoint 2010 farm set up: 2 WFE servers with Network Load Balancing hosting the main portal site. As per Microsoft's best-practice recommendations I've set up My Sites in a separate web application. As some of the user base are not using domain-joined PCs, they have to log in once for the portal (http://portal) and then again when they access My Sites, since they're crossing into a separate web application on a separate host (http://mysite). Portal and MySite are both hosted on the same physical WFE servers. Is there an easy way to set something up to stop this happening and just have them log in once? I understand that there are plans for us to deploy ISA in the not-too-distant future; could we use ISA to manage authentication to the two sites so that the users only need to log in once?


  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up two versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I installed the default PHP 5.4.19, and I was trying to set up another PHP version, 5.2.17, to run under FastCGI, so I followed the second part of the tutorial completely. However, when I try to open http://web2.example.com it returns a 500 error. In the Apache log there are only two lines that repeat:

        [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh
        [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255

    Please note that I had to add .php at the end of the FCGIWrapper directive, because Apache would not start without it:

        FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php

    Also please note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
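    Return code 255 usually means the wrapper script itself exited before php-cgi ever served a request; running the wrapper once by hand will print any error directly. A minimal known-good shape for such a wrapper, assuming the tutorial's install prefix (the PHPRC path is an assumption):

        #!/bin/bash
        # keep each php-cgi process alive across many requests
        export PHP_FCGI_MAX_REQUESTS=1000
        # assumed location of php.ini for the 5.2 build
        export PHPRC=/usr/local/php52/etc
        # exec replaces the shell so mod_fcgid manages php-cgi directly
        exec /usr/local/php52/bin/php-cgi

    Also verify that /usr/local/php52/bin/php-cgi exists (the 5.2 build must have been configured as a CGI/FastCGI binary) and that the wrapper script is executable.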


  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused.

    The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development?

    The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?

    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, albeit the site seemed to generally work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. OTOH, I wonder how many users have been alienated by this approach.

    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? I mean, like really, not figuratively or presumptuously.


  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use Source Control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place with almost 28% of the vote was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use Source Control".

    At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed, when the system worked before but is now broken; work out what happened to their changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team has a functioning build; or find instantly whether a suspect change has been deployed to production.

    Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more about its metadata than a traditional compiled application. However, what is meat for a small development is poison for a team-based development. Here, we need a form of Source Control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges.

    The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion, with its support for merges and forks and its tolerance of different work practices, can be made to work well if used carefully. It has a standards-based architecture that allows it to be used on all platforms such as Windows, Mac, and Linux.

    In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference the column), you'll find out about it right away, and you'll be forced to fix it.

    Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question then becomes more subtle. Would developers be less reluctant to use a fully-featured source (revision) control system for a team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think.

    Cheers, Tony.


  • How to make cron run on OSX 10.6.2?

    - by Radek
    Note: this question is not about how to edit the crontab but how to make cron work. I edited my crontab using:

        env EDITOR=joe crontab -e

    I entered:

        1 * * * * echo 'test' > /Users/radek/Backup/rationalvmware/test.txt

    and it does nothing, although the crontab is set up correctly (checked via CronniX, and viewed the crontab in /var/cron/tabs). Editing the crontab using CronniX gives me the same results. If I run

        echo 'test' > /Users/radek/Backup/rationalvmware/test.txt

    manually, it creates the file as expected, so I assume that the command I provide to cron is the correct one. Is there anything special I have to do to make cron work on OSX? How can I check whether cron is running? What's the equivalent of /var/log/messages on OSX? I can see in messages on SuSE that cron works.
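    Two quick checks, assuming stock OSX 10.6: cron is started and supervised by launchd, and its complaints land in the system log:

        # is cron actually running?
        ps aux | grep [c]ron

        # the rough equivalent of /var/log/messages on OSX
        grep cron /var/log/system.log

        # confirm launchd has the cron job loaded
        sudo launchctl list | grep cron

    Note also that the schedule "1 * * * *" fires once per hour (at minute 1), not once per minute; "* * * * *" is the every-minute form if that was the intent.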


  • Can't boot into Windows 7/Ubuntu 12.04 after running boot-repair

    - by Rini
    I installed Ubuntu 12.04 on my Sony Vaio E series laptop with preinstalled Windows 7, following the instructions here: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/

    Everything went well, and I was able to boot into Windows after the complete installation of Ubuntu. Then, following instructions on the web, I tried to add Ubuntu to my BIOS using EasyBCD (but forgot to add a Windows 7 entry). As a result, I lost the Windows 7 OS and couldn't boot into either OS. I then successfully repaired Windows 7 using a recovery CD.

    Now my problem is that I can't reinstall Ubuntu 12.04 using the Live CD; it halts every time before the disk partition step, giving the error:

        "ubi-partman crashed"
        "ubi-partman failed with exit code 141. Further information may be found in /var/log/syslog. Do you want to try running this step again before continuing? If you do not, your installation may fail entirely or may be broken."

    Any choice to continue results in the same error. After that, following some posted solutions, I ran boot-repair commands in a terminal (in "Try Ubuntu" mode) and got the following URL: http://paste.ubuntu.com/1206434/

    Now, after restart, I can't boot into either Windows or Ubuntu. Even attempts to run Windows repair fail, with the message: "No operating system found". I don't know what went wrong after running boot-repair. Please help in solving this issue. Thanks and regards, R Shukla
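    An exit code of 141 is 128 + 13, i.e. the partitioner was killed by SIGPIPE, which typically means the backend reading the disk died mid-conversation, often over a damaged partition table. From the same "Try Ubuntu" session, a hedged look at what the tools see:

        sudo parted -l                 # does parted itself error out?
        sudo fdisk -l
        tail -n 50 /var/log/syslog     # the details ubi-partman points at

    If parted reports an error on the disk, that points at the partition table left behind by the Windows repair and boot-repair runs, rather than at the installer itself.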


  • Winlogon.exe causes C++ runtime error

    - by Evan
    Recently I've become unable to log into my Dell Precision M2400. It uses the Dell ControlPoint login GUI instead of the typical Windows one, and has now started giving me a runtime error on winlogon.exe that ends with a c000021a BSOD. I have tried running through safe mode and a restore to the last known good configuration with no success. Unfortunately my BIOS is password-locked, and the one IT guy with the password is on vacation and unreachable until after I leave for a business trip. Is there any way to bypass the Dell logon screen and get to the default Windows one? Thanks.


  • Unable to create system partition or locate existing system partition during Windows 7 installation

    - by glenneroo
    I have Windows XP 32-bit installed on an ASUS A8N-SLI Deluxe with 2x 500GB drives in RAID1 using the NV RAID controller. On this array there are 3 partitions (XP, XP backup and DATA). There are also 4x 500GB drives in RAID10 using the Silicon Image 3114R RAID controller. I just purchased Windows 7 64-bit as an ISO download upgrade version, which I promptly burned to DVD and attempted to perform an upgrade installation. The error message I am getting is the one in the title: "Unable to create system partition or locate existing system partition", with a pointer to the Setup log files.

    Firstly, where are these "Setup log files" located? Secondly, does this mean I need to find compatible (64-bit?) drivers for the mainboard and put them on a floppy?

    EDIT: As suggested on another forum, I tried downloading the nVidia mainboard RAID drivers for Windows 2003 64-bit. I loaded the drivers successfully using the Load Driver button, but pressing Next still returns this error.
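    On where the Setup log files live: Windows Setup keeps its logs on the RAM drive it boots from, reachable from inside the installer itself; no extra tools are needed:

        rem press Shift+F10 at any Setup screen to open a command prompt
        notepad X:\windows\panther\setupact.log
        notepad X:\windows\panther\setuperr.log

    setuperr.log is the filtered errors-only view, so it is the faster read.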


  • libgdx intersection problem between rectangle and circle

    - by Chris
    My collision detection in libgdx is somehow buggy. player.png is 20x80 px and ball.png is 25x25 px. Code:

        @Override
        public void create() {
            // ...
            batch = new SpriteBatch();
            playerTex = new Texture(Gdx.files.internal("data/player.png"));
            ballTex = new Texture(Gdx.files.internal("data/ball.png"));

            player = new Rectangle();
            player.width = 20;
            player.height = 80;
            player.x = Gdx.graphics.getWidth() - player.width - 10;
            player.y = 300;

            ball = new Circle();
            ball.x = Gdx.graphics.getWidth() / 2;
            ball.y = Gdx.graphics.getHeight() / 2;
            ball.radius = ballTex.getWidth() / 2;
        }

        @Override
        public void render() {
            Gdx.gl.glClearColor(1, 1, 1, 1);
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            camera.update();

            // draw player, ball
            batch.setProjectionMatrix(camera.combined);
            batch.begin();
            batch.draw(ballTex, ball.x, ball.y);
            batch.draw(playerTex, player.x, player.y);
            batch.end();

            // update player position
            if (Gdx.input.isKeyPressed(Keys.DOWN))
                player.y -= 250 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.UP))
                player.y += 250 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.LEFT))
                player.x -= 250 * Gdx.graphics.getDeltaTime();
            if (Gdx.input.isKeyPressed(Keys.RIGHT))
                player.x += 250 * Gdx.graphics.getDeltaTime();

            // don't let the player leave the field
            if (player.y < 0) player.y = 0;
            if (player.y > 600 - 80) player.y = 600 - 80;

            // check collision
            if (Intersector.overlaps(ball, player))
                Gdx.app.log("overlaps", "yes");
        }
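    One likely cause, going only by the code above: Circle's x/y is the circle's centre, while SpriteBatch.draw() takes the bottom-left corner of the texture, so the drawn ball sits offset from its collision shape by one radius. A hedged fix for the draw call:

        // Circle (x, y) is the centre; draw() wants the bottom-left corner,
        // so offset by the radius to line the sprite up with the shape
        batch.draw(ballTex, ball.x - ball.radius, ball.y - ball.radius);

    The Rectangle and playerTex already agree (both use bottom-left), so the player needs no change.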


  • Networking not working in Windows 7 - "The account specified for this service is different from the account specified for other services"

    - by tog22
    I have a home-built computer with a GA-Z68MA-D2H-B3 motherboard with a Realtek RTL8111E LAN chip. Ethernet was working fine in Windows 7 until I reinstalled the OS; now it has stopped, while still working in other OSes. I've tried reinstalling drivers from both the Gigabyte and Realtek sites to no avail. I've also plugged in and installed an Asus USB-N13 wifi dongle, and oddly that doesn't work either. How can I diagnose and fix this issue?

    In 'Network and Sharing Center', under the heading 'Unknown', it says: "The account specified for this service is different from the account specified for other services running in the same process".

    Following the advice at http://www.sevenforums.com/network-sharing/130159-dependency-service-group-failed-start.html#post1122627 I've gone to Control Panel > Admin Tools > Services and ensured the services listed at that URL are started. One (IIRC 'COM+ Event System') refused to start as user 'Local Service', so I've had to set it to log on with the 'Local System account'; I suspect this may be part of the problem.
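    That exact error appears when one service in a shared svchost.exe process is configured to log on with a different account from its process-mates, which matches the COM+ Event System change described above. Putting it back on its default account is a reasonable first step; standard sc syntax from an elevated prompt (the space after obj= is required):

        sc config EventSystem obj= "NT AUTHORITY\LocalService"
        sc qc EventSystem
        net start EventSystem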

