Search Results

Search found 12582 results on 504 pages for 'remove'.

  • GRUB having problems with my sda1 UUID

    - by Igoru
    I was having trouble trying to set up triple boot on my computer... (Take a look at this thread if you think it would help.) I ended up with a GRUB menu that has the Ubuntu entries and a "Windows" entry, which calls an EasyBCD menu to choose between Windows 7 and XP. Everything would be fine if, and only if, GRUB were set up correctly. I can't find out why, but it throws an error and drops me into a BusyBox shell when I try to boot Ubuntu. I've already tried removing menu.lst and running update-grub, and grub-install too. I tried to create a symlink to /dev/sda1 at /dev/disk/by-uuid/<<uuid that is there>>, just like the other UUIDs that were there... but I couldn't find that symlink in the BusyBox shell that opened when it threw the error. Any ideas?

    [UPDATE] This is the GRUB entry with problems:

        title Ubuntu 9.04, kernel 2.6.28-15-generic
        uuid b1ed36e5-4d84-4eb8-86ef-6f1135ffc238
        kernel /boot/vmlinuz-2.6.28-15-generic root=UUID=b1ed36e5-4d84-4eb8-86ef-6f1135ffc238 ro quiet splash
        initrd /boot/initrd.img-2.6.28-15-generic
        quiet

    And this is my /dev/disk/by-uuid folder:

        04DCBCFBDCBCE856                     -> ../../sdb1 (NTFS backup disk)
        4434E77734E769FE                     -> ../../sda4 (NTFS WinXP)
        ACB09F0DB09EDCE0                     -> ../../sda2 (NTFS Win7)
        b5311be8-a853-4fdd-aed5-d65974b3c0c4 -> ../../sda5 (EXT4 home)
        C04B-4D97                            -> ../../sdc (FAT32 live pendrive from which I'm running)
        D28447F68447DB9B                     -> ../../sda6 (NTFS files partition)
        e0e88f38-d815-423a-9d5e-64b9c74a8b92 -> ../../sda7 (swap)
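
    A quick cross-check worth doing here: compare the UUID in that GRUB entry with what the kernel actually reports for /dev/sda1, and if they differ, regenerate GRUB's files from inside the installed system. A minimal sketch, run from the live pendrive and assuming /dev/sda1 is the Ubuntu root partition:

        # List the real filesystem UUID of sda1 and compare it with the
        # root=UUID=... value in the menu entry above.
        sudo blkid /dev/sda1

        # If they differ, chroot into the installed system and let it
        # rewrite its own boot files with the current UUIDs.
        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt update-grub           # regenerates menu.lst
        sudo chroot /mnt grub-install /dev/sda # reinstalls GRUB to the MBR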

    Read the article

  • McAfee ePolicy-Orchestrator (ePO) - policy ownership by groups?

    - by bkr
    Is there a way to grant ownership of an ePO policy to a group? Alternatively, is there a permission that can be set that would allow owners of an ePO policy to add other owners to that policy without making them ePO admins? In the case I'm looking at, ePO is deployed within a large heterogeneous organization with a large amount of delegation, in the form of create/modify policy rights, to allow multiple IT departments to customize policies for their sections of the system tree. The problem is that each policy is owned by its creator. This causes problems when they leave (staff turnover) or when other people on their teams need the ability to modify an existing policy. Unfortunately, as far as I can see, only an ePO admin can change the owners; even the owner of the policy cannot add other owners (unless they are also an ePO admin). Ideally, I should be able to assign ownership of a policy to a group, since that would be easier to manage than me or another admin having to continually fix policy ownership or remove orphaned policies. Even just allowing the owners of a policy to add other owners would be sufficient. How are other people handling policy ownership when dealing with a large amount of delegated control of policies? Is there a way to delegate this out without making users full ePO admins?

    Read the article

  • Random "not accessible" "you might not have permission to use this network resource"

    - by Jim Fred
    A couple of computers, both Win7-64, can connect to shares on a NAS server - at least most of the time. At random intervals, these Win7-64 computers cannot access some shares but can access others on the same NAS. When access is denied, a dialog box appears saying "\\myServer\MyShare02 not accessible... you might not have permission to use this network resource..." Other shares, say \\myServer\MyShare01, ARE accessible from the affected computers, and yet other computers CAN access the affected shares. Rebooting an affected computer seems to let it connect to the affected shares again - but then, getting a cup of coffee seems to help too. When the problem appears, the network seems to be OK, e.g. the affected computers can access other shares on the affected server, can ping, etc. Also, other computers can access the affected shares. The NAS server is a NetGear ReadyNAS Pro. The problem might be on the NAS side, such as a resource limitation, but since only the 2 Win7-64 PCs seem to be affected, the problem could be on the PC side - I'm not sure yet. I have of course searched for solutions and found several tips addressing initial connection problems (use the correct workgroup name, use the IP address instead of the server name, remove security restrictions, etc.) but none of those remedies address the random nature of this problem.

    Read the article

  • Have a .cmd file in Startup wait for the system-wide startup script to run

    - by Dan
    On a computer I'm not an administrator on, there is a startup file (a .cmd file in C:\Documents and Settings\All Users\Start Menu\Programs\Startup) for all users. It does several things I don't like, so I have created my own .cmd file, which I have placed in C:\Documents and Settings\[user]\Start Menu\Programs\Startup. I want my code to run after the system-wide script, because my script first undoes the network mappings the system-wide file made and then replaces them with my own network mappings. How can I make my script wait and start as soon as the system-wide one is done? (I cannot remove or edit the system-wide file.) Thanks

    EDIT: The time the system-wide file takes to run varies, and I would like my file to run right after it. The "if exist" method seems a little too contingent, since the system-wide script can change or a file could be moved. I am hoping to give my script to a few of my coworkers, so I'd like it to work without me having to update anything. Also, being a Linux guy, I don't know cmd code, so please write out any coding suggestions.

    Read the article

  • Cannot configure hostname - it keeps changing after reboot, CentOS 6 + nginx [on hold]

    - by The Wolf
    I just finished this tutorial I found online: http://www.unixmen.com/install-lemp-nginx-with-mariadb-and-php-on-centos-6/. Now I am having trouble setting a hostname; you can see the result at http://www.intodns.com/busilak.com. Here are my confs.

    /etc/hosts:

        127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
        # Auto-generated hostname. Please do not remove this comment.
        198.49.66.204 host.busilak.com busilak.com host
        ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

    /etc/sysconfig/network:

        NETWORKING="yes"
        GATEWAYDEV="venet0"
        NETWORKING_IPV6="yes"
        IPV6_DEFAULTDEV="venet0"
        HOSTNAME="host.busilak.com"

    /etc/nginx/conf.d/default.conf:

        server {
            #listen       80;
            #server_name  host.busilak.com;
            #charset koi8-r;
            #access_log  logs/host.access.log  main;

            location / {
                root  /usr/share/nginx/html;
                index index.html index.htm;
            }

            error_page 404 /404.html;
            location = /404.html {
                root /usr/share/nginx/html;
            }
        }

    Question: Is there anything I should have done? I just want to use my domain, busilak.com, as the default domain for my server, so that when I open busilak.com it points straight to my VPS's IP address.
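
    For reference, a sketch of making the hostname stick on CentOS 6 (values taken from the files above). One caveat: GATEWAYDEV="venet0" suggests an OpenVZ container, and on OpenVZ the host node can rewrite the hostname at boot, in which case it has to be changed through the provider's control panel rather than locally.

        # Set the hostname for the running system.
        hostname host.busilak.com

        # Persist it for the next boot; CentOS 6 reads HOSTNAME= from here.
        sed -i 's/^HOSTNAME=.*/HOSTNAME="host.busilak.com"/' /etc/sysconfig/network

        # Confirm the short and fully-qualified names resolve via /etc/hosts.
        hostname && hostname -f

    Note also that the intodns.com report reflects the domain's DNS records (NS/A entries at the registrar), which no amount of local server configuration will change.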

    Read the article

  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I try to update the initrd, my initrd becomes much bigger. Here is what I did:

      1. Extract the initrd that was shipped with Ubuntu.
      2. Make a list of all modules within the old initrd.
      3. Get the same modules from the new module directory at /lib/modules/new_kernel_version.
      4. Add these modules to the initrd and remove the old ones.

    If I do this, my initrd becomes quite a bit bigger than the original one, so I checked the individual modules. I picked the btrfs.ko filesystem driver as an example. Comparing these two modules, I noticed the one I had just put into the initrd was much bigger, which caused the difference in size.

        -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko

    For the btrfs.ko within the shipped initrd.

        -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko

    For the new btrfs.ko. What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else which is being done to the modules before they are put into the initrd? Maybe there is even some better way to build a new initrd for the new kernel in Ubuntu altogether.

    Update: I just tested with an initrd which I created from scratch using the mkinitramfs command within Ubuntu, and it has the same size difference that I found for the initrd I manually updated.
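
    That size pattern (a module ballooning from under 1M to several M) usually means the freshly built modules still carry debug info, which distro kernels strip at install time. A quick check and fix, as a sketch:

        # "not stripped" in this output means debug symbols are present.
        file /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko

        # Strip the debug info from a single module...
        strip --strip-debug btrfs.ko

        # ...or have the kernel build strip every module at install time.
        make INSTALL_MOD_STRIP=1 modules_install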

    Read the article

  • XP - ping changes routing table?

    - by Corelgott
    Hey folks, I have got a really strange behaviour on one of my XP SP3 machines.

    Setup: a server in the LAN (192.168.5.0) provides access to all roadwarriors in 10.8.0.0. The DHCP pushes a static route to all clients, announcing 192.168.5.235 as the gateway for 10.8.0.0. All clients can ping and access the VPN machines; everything works like a charm. But one XP SP3 box is not willing to connect to them. It gets all the same routes as any other system in the LAN, and I triple-checked - there are no static routes on this machine. When I ping any 10.8.0.0 device from this machine, the first two packets work like a charm, but the next two (and any packet after them) fail and get lost. When I look back into the routing table, there is a new route: a special one just for the device I pinged, which points to the right gateway - but which wasn't there earlier... As long as this route exists, the machine can't ping anything in 10.8.0.0. But if I remove the route by hand, the next two ping packets work fine... Has anybody got an idea about that? Has anybody ever seen such a behaviour? Any hint / help / tip is greatly appreciated! Thanks in advance, Corelgott. PS: I've attached an image of the cmd output to clarify things - it's in German, but reading a routing table shouldn't be that hard...

    Read the article

  • How to import this data set into excel? (column headings on each row delimited by a colon)

    - by Anonymous
    I'm trying to import the following data set into Excel. I've had no luck with the text import wizard. I'd like Excel to make id, name, street, etc. the column names and insert each record onto a new row.

        , id: sdfg:435-345, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

        , id: sdfg:435-345f, name: Some Name, type: , street: Address Line 1, Some Place, postalcode: DN2 5FF, city: Cityhere, telephoneNumber: 01234 567890, mobileNumber: 01234 567890, faxNumber: /, url: http://www.website.co.uk, email: [email protected], remark: , geocode: 526.2456;-0.8520, category: some, more, info

    Is there any easy way to do this with Excel? I'm struggling to think of a way to convert this to a conventional CSV easily. As far as I can think, I'd have to remove the labels from each line, enclose each line in quotes, then delimit them with commas. Obviously that's made a little more difficult to script, seeing as some fields (address, for instance) contain comma-delimited data. I'm not good with regex at all. What's the best way to tackle this?
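
    Since the labels are fixed, one workable trick is to treat ", <label>:" itself as the delimiter rather than the comma, so commas inside values (street, category) survive intact. A rough preprocessing sketch with GNU sed, assuming one record per line as in the sample above and an input file named data.txt (both assumptions):

        #!/bin/sh
        labels='id|name|type|street|postalcode|city|telephoneNumber|mobileNumber|faxNumber|url|email|remark|geocode|category'

        # Header row first, then one tab-separated row per record: every
        # ", label:" delimiter becomes a tab, and the leading tab is dropped.
        printf 'id\tname\ttype\tstreet\tpostalcode\tcity\ttelephoneNumber\tmobileNumber\tfaxNumber\turl\temail\tremark\tgeocode\tcategory\n' > data.tsv
        sed -E "s/(^|, )($labels): */\t/g; s/^\t//" data.txt >> data.tsv

    The resulting data.tsv opens directly in Excel as a tab-delimited file, with each label as a column heading.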

    Read the article

  • mod_rewrite redirect subdomain to folder

    - by kitensei
    I have a WordPress blog at the URL http://www.orpheecole.com. I would like to set up 3 subdomains (cycle1, cycle2, cycle3), each being redirected to its own folder (1 subdomain = 1 WP blog, no multisite enabled). The file tree looks like this:

        /var/www/orpheecole.com/
        /var/www/cycle1.orpheecole.com/
        /var/www/cycle2.orpheecole.com/
        /var/www/cycle3.orpheecole.com/

    The following .htaccess tries to redirect to /var/www/orpheecole.com/cycleX instead of each subdomain's own directory, but if it's possible I'd rather redirect every subdomain to its own www folder. My sites-enabled file for the main site is:

        # blog orpheecole
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName orpheecole.com
            ServerAlias *.orpheecole.com
            DocumentRoot /var/www/orpheecole.com/
            <Directory /var/www/orpheecole.com/>
                Options -Indexes FollowSymLinks MultiViews
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/orpheecole.com-error_log
            TransferLog /var/log/apache2/orpheecole.com-access_log
        </VirtualHost>

    And the .htaccess located in /var/www/orpheecole.com/ looks like this:

        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^www.* [NC]
        RewriteCond %{HTTP_HOST} ^([^\.]+)\.orpheecole\.com$
        RewriteCond /var/www/orpheecole.com/%1 -d
        RewriteRule ^(.*) www\.orpheecole\.com/%1/$1 [L]

        # BEGIN WordPress
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        # END WordPress
        </IfModule>

    I tried removing the WordPress directives but nothing changed, and the rewrite module is enabled and working.
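
    Two observations, offered as a sketch rather than a definitive fix. First, the RewriteRule above has no scheme, so mod_rewrite treats www\.orpheecole\.com/%1/$1 as a local URL path rather than issuing an external redirect; a redirect to another host needs http:// and the [R] flag. Second, since each blog already has its own directory, the simpler route is to skip mod_rewrite entirely and give every cycle its own VirtualHost. One hypothetical example below; repeat for cycle2/cycle3, remove the ServerAlias *.orpheecole.com line from the main vhost, then reload Apache:

        # /etc/apache2/sites-available/cycle1.orpheecole.com
        <VirtualHost *:80>
            ServerName cycle1.orpheecole.com
            DocumentRoot /var/www/cycle1.orpheecole.com/
            <Directory /var/www/cycle1.orpheecole.com/>
                Options -Indexes +FollowSymLinks +MultiViews
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>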

    Read the article

  • Is there a man-in-the-middle attacking my server machine?

    - by GongT
    My server has worked well for about half a year, but a strange thing happened a few hours ago. This server has two IP addresses: 58.17.85.19 and 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a "redirect loop". I did some tests ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"), step by step:

      1. Run $cmd on my PC and on my friend's (we live on two sides of China, far apart) - got 302.
      2. Run $cmd on the server itself - got 200 OK (the content is the correct output of index.php).
      3. Run $cmd on another server in the same computer room - got 200 OK.
      4. Telnet from my PC and build an HTTP request by hand - got 200 OK.
      5. Shut down php-fpm, run $cmd on my PC - got 302.
      6. Run $cmd on the server - 502 Bad Gateway.
      7. Shut down nginx, run $cmd on both the server and my PC - Connection refused.
      8. Create an iptables rule refusing any connection to 58.17.85.19:80, run nc -l 80 -k -vvv on the server, and run $cmd on my PC. nc shows that the server accepts the connection ("Connection from [my ip]") and that my connection then closes ("Remove fd xx from list") - yet wget dumps out a response: got 302!

    I know that normally nc will accept the connection and dump the HTTP request from the client, and the client will wait for a response; that connection stays open (in fact the client closes it on timeout) because nc can't give any response. So... where did my request go? Who sent a response to the client? Some virus on my server system? If so, why doesn't 58.17.85.19 have this error? Or... am I being attacked by a middleman?
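
    One more data point worth collecting: capture the wire on the server itself while a client triggers the 302. If the 302 never appears in the server-side capture, it was injected somewhere along the path. A sketch (the interface name eth0 is an assumption):

        # On the server: record all port-80 traffic for the affected IP.
        tcpdump -i eth0 -nn -s0 -w port80.pcap 'host 117.21.178.19 and tcp port 80'

        # Trigger $cmd from the client, stop the capture, then inspect it.
        # Compare the TTL of the SYN-ACK with that of the 302 packet:
        # injected responses typically arrive with a different TTL.
        tcpdump -r port80.pcap -nn -v -A | grep -B5 '302'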

    Read the article

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + arrow keys for 13 years to switch virtual desktops in X, I've recently been convinced to switch to using the Super keys instead (the Windows key and the context-menu key, which I've remapped). This all works fine for the most part. However, something is still picking up the key events these keys send, as if they were normal alphanumeric keys. For example, I first noticed in a Google Docs spreadsheet that if I press the Windows key alone while a cell is selected, it starts editing that cell. It doesn't insert anything; it just sends a key event that Firefox sees and starts editing the cell. This caused problems on a collaborative document I was working on, as the way Google Docs works led to me accidentally erasing the data in a few fields before I realised what was going on. I like using the Super keys, but I want them to behave more like a Ctrl or Alt key does, in that each is a modifier key and doesn't send anything until a second key is pressed. My setup is the following:

      - Ubuntu 10.10
      - XFCE 4
      - Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)

    The following is my .Xmodmap file:

        remove Lock = Caps_Lock
        keycode 66 = Escape
        ! The below maps my other windows context menu key.
        keycode 135 = Super_R

    Edit: As requested, here is the relevant output from xev for a keypress and keyrelease of my Super_L (left Windows key):

        KeyPress event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
            state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
            state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False
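
    For what it's worth, assigning a keycode to Super_R only changes which keysym the key produces; for X to treat it like Ctrl/Alt it must also be attached to a modifier map (conventionally mod4 for Super). A sketch to test with, which could then be added to .Xmodmap if it helps:

        # Attach both Super keysyms to the mod4 modifier.
        xmodmap -e "clear mod4"
        xmodmap -e "add mod4 = Super_L Super_R"

        # Verify: the mod4 row should now list Super_L and Super_R.
        xmodmap -pm

    Note that even proper modifiers still emit KeyPress/KeyRelease events - applications simply ignore them - so if Google Docs keeps reacting to a lone Super press, the remaining culprit is likely how the browser maps that keysym rather than X itself.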

    Read the article

  • Nvidia RAID 1 Problem. Degraded drives...

    - by Vedat Kursun
    I had a RAID 1 on my system, which has a Gigabyte GA-8N SLI motherboard with an Nvidia chipset (Nvidia RAID IDE ROM BIOS 4.84). When the system was working properly, there used to be an icon in the system tray which showed my two RAID disks. But after my friend accidentally clicked on the "Remove drive safely" icon while trying to disconnect her USB drive, I noticed that the RAID system wasn't working. After a reboot, there was suddenly a failure message during the boot screen. When I enter the Nvidia RAID setup utility (F10), I can see that both drives are degraded, and that won't change even if I select them and press R for Rebuild. The only other options are Delete and Exit. When I boot into Windows (XP Pro 32-bit), I can see both of my disks with the same data on each of them, but my RAID 1 is broken. It's a relief to see that at least my data is still there on both disks, but it's annoying not being able to rebuild the array. Is there a way I can rebuild my RAID 1 without having to delete the array and build it again? Because I don't want to back up 400 gigs of data and then recopy it to my drives... (Disks: 2 x Seagate ST3500418AS SATA drives)

    Read the article

  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run via a cron job daily to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I'm uncertain how to log the output of each command for errors, warnings, or other pertinent information the commands may output. I would also like to install some type of fail-safe, so if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running a standard LAMP stack.

        #!/bin/sh

        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`

        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar

        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/

        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
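
    One common pattern covering both requirements, sketched against the script above (the recipient address and a configured mail command are assumptions):

        #!/bin/sh
        # Abort on the first failing command.
        set -e

        THEDATE=`date +%m%d%y%H%M`
        LOG=/root/backups/logs/server_BAK_${THEDATE}.log
        mkdir -p /root/backups/logs

        # From here on, send stdout and stderr of every command to the log.
        exec >"$LOG" 2>&1

        # If anything exits non-zero, set -e fires this trap and the log
        # gets mailed out before the script dies.
        trap 'mail -s "backup FAILED on `hostname`" admin@example.com < "$LOG"' EXIT

        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
        rm -f /root/backups/files/server_BAK_${THEDATE}.tar.gz

        # Success: cancel the failure trap so no mail is sent.
        trap - EXIT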

    Read the article

  • Nginx conditional not evaluating correctly

    - by cjc
    I'm running into a weird problem with nginx and how it evaluates conditionals. Here's the relevant configuration:

        set $cors FALSE;
        if ($http_origin ~* (http://example.com|http://dev.example.com:8000|http://dev2.example.com)) {
            set $cors TRUE;
        }
        if ($request_method = 'OPTIONS') {
            set $cors $cors$request_method;
        }
        if ($cors = 'TRUE') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Max-Age' '1728000';
        }
        if ($cors = 'TRUEOPTIONS') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'X-Requested-With, X-Prototype-Version';
            add_header 'Access-Control-Max-Age' '1728000';
            add_header 'Content-Type' 'text/plain';
        }

    So, the conditional blocks never trigger. When I remove the conditions, I see that the "Access-Test" and "Access-Control-Allow-Origin" headers are set correctly, but, as noted, enabling the conditionals causes the headers not to be sent. I'm testing by running:

        curl -Iv -i --request "OPTIONS" -H "Origin: http://example.com" http://staging.example.com/

    Am I missing something obvious? I've tried the "if" with and without quotes, etc. This is nginx 1.2.9.
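
    Not a definitive diagnosis, but if-blocks in nginx are notoriously limited (the wiki's "If is Evil" page documents the pitfalls, especially when several ifs stack in one context). The usual way to sidestep the whole chain is a map, evaluated per request; a sketch, assuming the origin list above is complete:

        # http {} context: derive $cors from the Origin header in one step.
        map $http_origin $cors {
            default "";
            "~^http://(example\.com|dev\.example\.com:8000|dev2\.example\.com)$" "TRUE";
        }

    The location block can then test a single derived value ($cors plus $request_method) instead of stacking conditionals.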

    Read the article

  • windows 7 clock jumps back about every hour + internet disconnects

    - by IlyaD
    I have the weirdest problem: for the last few days, every hour or so (not exactly), the clock jumps back in steps of 15 to 60 minutes, and after some time the internet disconnects. I can reconnect the internet by unplugging and plugging back in the network cable. This is the second time this problem has happened to me; the first time was when I installed VirtualBox, and when I removed it the problem was gone. This time I have no idea what triggered it. The only thing I installed around the time the problem started was an iTunes update (10.5.1). I tried removing it, but the problem remained. (I also tried removing other programs I installed a day or 2 before, and that didn't help either.) Also, this is not a VM, and currently I don't have any VM software. Any ideas..?

    UPDATE: Since the solution is in the comments, I'll write it here - apparently the Windows Time service got broken. This is the fix: http://technet.microsoft.com/en-us/library/cc738995(WS.10).aspx (run "Fix it" from the IE browser only).

    UPDATE 2: The fix didn't work. About 2 hours passed, and first the internet disconnected and then the clock jumped back... This probably means that the problem is not only in the clock.

    Read the article

  • Exchange migration: ExchangeTransport warnings after uninstalling source server

    - by carlpett
    After disabling/uninstalling Exchange from our source SBS 2003 server, I'm getting these warnings:

      - Event 5020: "The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 sourceserver.domain.local in Routing Group [...]"
      - Event 5006: "Cannot find route to Mailbox Server CN=SOURCESERVER [...] for store CN=[...]", for the Public Folder, First Storage Group, and Recovery Storage Group.

    I followed the TechNet article here: http://technet.microsoft.com/en-us/library/bb288905.aspx (linked from the SBS 2003 - 2011 migration guide). When uninstalling Exchange, I got a warning about NNTP not being found in the registry, but that didn't seem relevant, and the uninstall continued. The server was subsequently removed from the domain and shut down, as per the instructions. If I open the Public Folder Management console on the Exchange 2010 server, the public folders \NON_IPM_SUBTREE\EFORMS_REGISTRY and \Archived mails give an error on "Update content". I haven't found anything else which indicates something is wrong. We never really used the public folders on the old server, so there isn't really anything lost. Can I just remove these folders and let them be created anew?

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many" type of infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content, and insertion date, and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

      - 2 tables
      - lots of INSERTs
      - no UPDATEs
      - some DELETEs, once a day at most
      - some user-generated SELECTs with JOINs
      - a huge data set

    What would an optimal server configuration (software and hardware; I assume, for example, that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes...) if needed.
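
    Hardware aside, the biggest single lever for the JOIN-on-tags pattern just described is usually indexing. A sketch of what that might look like - the table and column names here are assumptions, since the actual schema isn't shown:

        # Hypothetical schema: entries(id, content, inserted_at),
        # tags(entry_id, name, value) - adjust names to the real tables.
        psql mydb -c 'CREATE INDEX idx_tags_name_value ON tags (name, value);'         # tag search
        psql mydb -c 'CREATE INDEX idx_tags_entry_id ON tags (entry_id);'              # join key
        psql mydb -c 'CREATE INDEX idx_entries_inserted_at ON entries (inserted_at);'  # daily purge

    Since the data is write-once, these indexes stay cheap to maintain relative to an update-heavy workload; the daily DELETE batch is the only thing that churns them.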

    Read the article

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server

    I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop.

    The problem: retain control

    Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example.

    I need him not to be able to:
      - take away other admin permissions
      - change the root password
      - have control over other security/administrative functions

    I would like him to still be able to:
      - install software (through apt-get)
      - restart apache
      - access mysql
      - configure mysql/apache
      - reboot
      - edit web-development configuration-type files in /etc/

    Other standard setups would be happily considered

    I've never really set up a good sudoers file, so simple example setups would be very useful (see the sketch below), even if they're only somewhat similar to the settings that I'm hoping for above.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
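
    A minimal sketch of one such scheme, assuming the developer's account is named dev (a made-up name) and editing with visudo:

        # Hypothetical /etc/sudoers fragment.
        Cmnd_Alias PKG   = /usr/bin/apt-get
        Cmnd_Alias WEB   = /usr/sbin/service apache2 restart, /usr/sbin/service mysql restart
        Cmnd_Alias POWER = /sbin/reboot

        dev ALL = (root) PKG, WEB, POWER

    Two caveats worth stating plainly: apt-get alone is effectively root (packages run maintainer scripts as root on install), so "can install anything" and "can never take root away from me" don't truly coexist; and write access to specific files in /etc is usually better handled with group ownership on those files than through sudo.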

    Read the article

  • How to Dual-Boot Kali-Linux and Windows 8.1?

    - by Ceyhun
    I have an Acer V3-772G with a 1 TB hard disk. I shrank my biggest partition in order to install Kali Linux. When installing Kali, GRUB couldn't detect Windows 8, so I kept going (I installed GRUB to the master boot record). After installing Kali there was no way to boot Windows 8.1, but booting Kali was OK with GRUB in legacy BIOS mode. When I tried to change the BIOS to UEFI, it couldn't find any OS (it took too much time, nearly 1 hour). So I tried to update GRUB with boot-repair from within a Ubuntu live USB. But after updating GRUB I was terrified: in both UEFI and legacy mode, GRUB couldn't find ANY OS (neither Kali nor Windows), so I have no option other than using the Ubuntu live system. I have tried every possible option but nothing has worked for me. I tried rEFInd in UEFI mode; it worked only for Kali. I still cannot boot my Windows 8.1. I considered restoring to factory settings with a Windows rescue USB, but it kept telling me "No driver found". Please help me to dual boot, or to remove Kali and restore my Windows 8.1.

    Read the article

  • Configure X connections over TCP without using an X connection

    - by Darren Cook
    I want to run a GUI application on a remote machine I only have ssh access to. I don't need to, or want to, see the GUI window. (I know I could use something like ssh -C -X remote_server if I wanted the GUI on my client.) I know X is running on the remote machine, as ps shows this:

        root ... /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7

    I set DISPLAY=:0.0 but I then get "Xlib: connection to ":0.0" refused by server" when I try to use it. At "Get remote x display working in linux without ssh tunneling" and "Xserver doesn't work unless DISPLAY=0.0" I see the advice to use gdmsetup to allow X to listen on TCP. But gdmsetup is a GUI application! And trying to run it over ssh -X did not work ("X11 connection rejected because of wrong authentication"). So, is there a text file I can edit to remove -nolisten? And, after editing it, how do I safely restart X remotely? (There is other stuff running on this machine, so requesting a reboot is possible, but undesirable.) If not, should gdmsetup be able to run over ssh, and should I persevere in that direction?

    UPDATE: I had to do the ssh -X session as root (ssh as a normal user, then sudo or su, does not work). So I did the edit with gdmsetup. I then restarted X with gdm-restart. I've also run xhost + from that ssh -X session. The ps line no longer shows the -nolisten tcp part. But still no luck connecting to it, with either DISPLAY=:0 or DISPLAY=localhost:0.
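
    Given that update, the piece that may still be missing is the X server's auth cookie: the ps line above shows it lives in /var/gdm/:0.Xauth, and a root shell can simply use that file directly. A sketch:

        # As root on the remote machine, in the plain ssh session:
        export XAUTHORITY=/var/gdm/:0.Xauth
        export DISPLAY=:0

        # Any X client should now start on the remote display.
        xclock &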

    Read the article

  • Diagnosing RAM issues

    - by TaylorND
    I have an old Acer Aspire T180 desktop. The specs are as follows:

      - AMD Athlon 64 3800+, 2.4 GHz
      - 1 GB DDR2 SDRAM
      - 160 GB hard drive
      - DVD writer (DVD±R/±RW)
      - Gigabit Ethernet
      - 17" active-matrix TFT color LCD
      - Windows Vista Home Basic
      - Mini-tower (AST180-UA381B)

    According to the information in the computer's documentation, the computer comes with 1 GB of RAM, as two DDR2 SDRAM sticks. I used to have Windows Vista installed; then I removed it and installed Windows 7, and I have since removed Windows 7 and installed Windows XP. According to Windows XP, with both RAM sticks in, the computer has 768 MB. Isn't this supposed to be 1 GB of RAM, i.e. 1024 MB? Is the amount of RAM installed only partly usable by the operating system? Is there something I'm missing? If I remove either one of the RAM sticks, I'm left with 448 MB of RAM. These numbers don't seem to add up: if each of the RAM sticks provides at least 448 MB, shouldn't both together provide 896 MB? And even then, isn't that less than a GB of RAM? I'm not too experienced with hardware, so I thought this would be the best place to ask. As a follow-up question: is the RAM I have enough to run/multitask with Windows XP efficiently? I plan to do a lot of computing with the system (although not gaming). Should I invest in more RAM?

    Read the article

  • Blue screen of Death on Install

    - by Toby Allen
    I have a machine with Windows Vista installed. It has an Intel X25 SSD as the system drive. I want to reinstall with XP (I plan to format and overwrite Vista). When I boot up using the Dell XP CD, it loads the initial drivers, then I get a blue screen:

        *** STOP: 0x0000007B (0xF78D2524,0x0000034,0x00000000,0x00000000)

    This is quite concerning. The installed OS works OK, but it's giving problems, so I want to remove it. Should I just format the SSD and try again? Will this make any difference? Can I do something to avoid hitting the blue screen? It's possible I had corrupt sectors on one of the other disks. Will a new XP install use the system drive or drive 0? Can I force the install to use a specific drive when installing?

    I never did find the answer. However:

      - I removed the SSD and tried to install on the other disk - CRASH
      - I disconnected the other disk and tried to install with only the SSD plugged in - CRASH
      - I removed 1 stick of RAM - CRASH
      - I used a Windows 7 CD - NO CRASH

    Read the article

  • How to redirect or rewrite IIS site with port in URL to URL without port?

    - by user2573690
    I'm not 100% sure if this is the right part of Stack Overflow to post this, but to me it made the most sense. Sorry if it's not! Currently I have a site in IIS configured on HTTPS with port 7500. I can access this site by using the URL https://portal.company.com:7500. What I would like to do is remove the port number from the end of the URL, so users can access this site using https://portal.company.com... I am a complete beginner with IIS. What I have tried is the HTTP Redirect feature, but if I used that on this IIS site, it would redirect a user that hits portal.company.com:7500 to some other site, which is not what I need. Another thing I have thought about is creating a second IIS site which serves the purpose of being at the URL portal.company.com and, when it's hit, redirects to my portal.company.com:7500, but I don't know if this is the best approach. So my question is: what are my options for achieving the behaviour mentioned above, and what is the best/recommended approach? I haven't played with URL Rewrite before, but I will look into that now while I wait for a reply. Thanks!! Using IIS Manager on a Windows Server 2008 machine.

    Read the article

  • Apache file appearing in directory list, but giving 404 when attempting access?

    - by aayush
    Please forgive my lack of knowledge; this is more of a learning project than anything else. I have a Linux box, and it works pretty much fine. When I go to example.com/css it says there's one file in there, bootstrap.min.css. When I go to example.com/css/bootstrap.min.css, it gives me a 404 error. I have only one htaccess file, to remove the index.php from the URL, which I also renamed to htaccess (instead of .htaccess, so Apache won't find it), and I restarted the server - yet no help. I also tried to chmod the CSS file 755, but no help. Contents of the htaccess file:

        RewriteEngine On
        RewriteBase /

        # Allow any files or directories that exist to be displayed directly
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L,QSA]

    Please help; I am very confused about this. I tried to Google excessively, but I came up with nothing.

    Edit: The solution turned out to be renaming the htaccess file to something entirely different and restarting. Is there any way I can still drop the .php from the URL?

    Read the article
