Search Results

Search found 89638 results on 3586 pages for 'file table'.


  • Install Windows 7 from ISO image

    - by Albert
    Hi, I have an ISO file of the Windows 7 DVD and I want to install it on my PC, which currently only runs Linux. I don't have a DVD drive. I have some unpartitioned space on one disk where I want to install it. When I do this for Linux, I usually just create the partitions from the running system, format them, mount them, copy the files over, chroot into it, set things up, and then I can boot into it (or I use one of the countless available scripts which do exactly that automatically). However, I have no idea how to do the same thing with Windows. So far I have tried VMware: I gave it direct full access to the disk where I want to install, installed Windows there, then tried to boot into it natively. The Windows logo showed up, but after maybe 3 seconds it crashes. Safe mode also crashes. I half expected this, because I have heard that Windows is quite sensitive to hardware changes (i.e. the VMware hardware versus the real hardware). But how can I fix it now so that it boots? Or I could just delete it and start over, but how exactly? I also searched for ways to boot directly into an ISO file. There seem to be ways to do that via GRUB (and maybe some additional boot loader), although they look quite complicated. I already tried one method (GRUB: map ...iso (hdX)), but that didn't work. Also, even if it does work, I will get into trouble when I boot into the newly installed Windows and it asks for the DVD (because it does that on the first boot into the new system). It all seems quite complicated. Isn't there some easy way like there is for Linux? What would be the easiest way to get what I want?


  • NFS Datastore Appears Empty!

    - by daemonchild
    Hi guys, I've got an NFS server problem. The datastore connected and appears to be a valid datastore in both the vSphere client and under /vmfs/volumes. The issue is that it appears to be empty! I can create files (e.g. touch /vmfs/volumes/nfs_common/thefile) and they are correctly written to the NFS store; I can verify this by looking on the NFS server itself. But the vmkernel only sees an empty datastore; the file disappears. Another FreeBSD box can mount the same NFS share and see the files correctly. Some useful data:

        ESXi 4.0.0 Build 208167
        NFS is unfsd running on a Buffalo Linkstation Pro Duo (a bit hacky, I know)
        The share has file system permissions set to 777 at the moment

    My /etc/exports is as follows, and as I say it connects fine:

        /mnt/array1/ESX_Shared 192.168.16.0/255.255.255.0(insecure,rw,sync,no_root_squash,no_subtree_check)

    The ESXi servers can also successfully mount NFS shares from other NFS servers. Any ideas, guys? Thanks, Tom
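
    For reference, a hedged sketch of how such a datastore is typically removed and re-added from the ESXi 4 console; the label nfs_common and export path come from the question, while the server IP is assumed:

        esxcfg-nas -d nfs_common                                              # drop the stale mount
        esxcfg-nas -a -o 192.168.16.10 -s /mnt/array1/ESX_Shared nfs_common   # re-add it (IP is illustrative)
        esxcfg-nas -l                                                         # list NFS mounts to verify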


  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd's tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to put some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never goes past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /home/user/tftp/
            disable     = no
            cps         = 300 2
            per_source  = 60
        }

    I've done some searching but can't find any parameters in this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help, either with a timeout configuration or maybe even a recommendation for a different TFTP server? Thanks!
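
    A hedged note on those settings: cps and per_source are xinetd rate limits, not TFTP timeouts; cps = 300 2 caps incoming connections at 300 per second (backing off for 2 seconds when exceeded) and per_source = 60 limits simultaneous instances per client IP. If the server binary is tftpd-hpa's in.tftpd, retransmission can instead be tuned through server_args; a sketch (the value shown is illustrative):

        server_args = -s /home/user/tftp/ -T 2000000 -v
        # -T sets the retransmission timeout in microseconds (2 s here; default 1 s)
        # -v turns on verbose logging so failed transfers show up in syslog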


  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and an NT name of ABC. I'm trying to get to a DNS name of abctrading.com and an NT name of ABCTRADING. Split DNS would be used. The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP, with a handful of users using roaming profiles. All users are in a single OU and there are no group policies/GPOs. I'm a Linux engineer, but I have been trying to guide another group of consultants toward a more suitable setup. With their help, we were able to move the single Windows 2000 system to a set of Windows 2008 R2 servers separated into domain controller and file/print systems (virtualized). We are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture. This is the tricky part: the client wants the domain renamed, and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. They say there's considerable risk in doing the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange, so we're stuck at this point. I'd like to know what's involved in renaming the domain at this stage. We're on Windows Server 2008 and the AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here? I'd appreciate any guidance.
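
    For orientation, the supported tool on Server 2008 is Microsoft's rendom; the outline below is a sketch of the documented sequence, not a tested runbook for this site:

        rendom /list          :: dump current forest naming into Domainlist.xml
        :: (edit Domainlist.xml, replacing domain.abc.com with abctrading.com)
        rendom /upload        :: push the rename instructions to the domain naming master
        rendom /prepare       :: verify every DC can accept the rename
        rendom /execute       :: perform the rename (all DCs reboot)
        gpfixup /olddns:domain.abc.com /newdns:abctrading.com
        rendom /end           :: unfreeze the forest configuration
        :: then reboot each domain-joined client twice to pick up the new name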


  • Scripting an 'empty' password in /etc/shadow

    - by paddy
    I've written a script to add CVS and SVN users on a Linux server (Slackware 14.0). This script creates the user if necessary, and either copies the user's SSH key from an existing shell account or generates a new SSH key. Just to be clear, the accounts are specifically for SVN or CVS, so the entry in /home/${username}/.ssh/authorized_keys begins with (using CVS as an example):

        command="/usr/bin/cvs server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa ....etc...etc...etc...

    Actual shell access will never be allowed for these users - they are purely there to provide access to our source repositories via SSH. My problem is that when I add a new user, they get an empty password in /etc/shadow by default. It looks like:

        paddycvs:!:15679:0:99999:7:::

    If I leave the shadow file as is (with the !), SSH authentication fails. To enable SSH, I must first run passwd for the new user and enter something. I have two issues with doing that. First, it requires user input, which I can't allow in this script. Second, it potentially allows the user to log in at the physical terminal (if they have physical access, which they might, and know the secret password -- okay, so that's unlikely). The way I normally prevent users from logging in is to set their shell to /bin/false, but if I do that then SSH doesn't work either! Does anyone have a suggestion for scripting this? Should I simply use sed or something and replace the relevant line in the shadow file with a preset encrypted secret password string? Or is there a better way? Cheers =)
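
    One hedged option (standard shadow-suite behaviour, though worth verifying on Slackware): a password field of * is treated as "no usable password" without marking the account locked, whereas the ! prefix locks it, and sshd refuses locked accounts on non-PAM systems. The script could set that in one non-interactive step:

        # give the new account an unusable, but not locked, password
        # so pubkey SSH works while password login stays impossible
        usermod -p '*' "$username"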


  • Error related to pkg-config when installing frei0r as part of another package

    - by Anentropic
    I am trying to build https://github.com/mltframework/shotcut on OS X Lion (using their script in scripts/build_shotcut.sh) and after numerous hurdles I'm stuck on this error:

        ./configure: line 16062: syntax error near unexpected token `OPENCV,'
        ./configure: line 16062: `PKG_CHECK_MODULES(OPENCV, opencv >= 1.0.0, HAVE_OPENCV=true, true)'
        ERROR: Unable to configure frei0r

    From what I've already googled, this means the PKG_CHECK_MODULES macro hasn't been defined, which probably means there's something wrong with my pkg-config, which I installed via Homebrew. It sounds like the pkg.m4 file isn't being found. When I brew install pkg-config I get the following warning:

        Warning: m4 macros were installed to "share/aclocal".
        Homebrew does not append "/usr/local/share/aclocal"
        to "/usr/share/aclocal/dirlist". If an autoconf script you use
        requires these m4 macros, you'll need to add this path manually.

    Well, I've appended that line to the dirlist file and it doesn't fix the problem. Can anyone suggest a way forward? I briefly tried building my own pkg-config from source, but (bizarrely) when I ran ./configure I got the following error:

        checking for pkg-config... no
        ./configure: line 13540: --exists: command not found
        configure: error: pkg-config and glib-2.0 not found, please set GLIB_CFLAGS and GLIB_LIBS to the correct values

    If building pkg-config needs pkg-config, it seems like a weird catch-22 situation... I think this is probably an unnecessary sidetrack anyway.
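
    One hedged observation: dirlist is only consulted when aclocal runs, and the frei0r source ships a pre-generated configure, so the unexpanded PKG_CHECK_MODULES stays there until the script is regenerated. A sketch of what regenerating might look like (paths assume the Homebrew layout above):

        cd frei0r                        # wherever the build script unpacked the source
        aclocal -I /usr/local/share/aclocal
        autoreconf -fiv                  # rebuild configure with pkg.m4 available
        ./configure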


  • Can I get all Active Directory passwords in clear text using reversible encryption?

    - by christian123
    EDIT: Can anybody actually answer the question? Thanks. I don't need an audit trail; I WILL know all the passwords, users can't change them, and I will continue to operate this way. This is not for hacking! We recently migrated away from an old and rusty Linux/Samba domain to Active Directory. We had a custom little interface to manage accounts there. It always stored the passwords of all users and all service accounts in cleartext in a secure location (of course, many of you will certainly not think of this as being secure, but without real exploits nobody could read them) and disabled password changing on the Samba domain controller. In addition, no user can ever choose his own password; we create them using pwgen. We don't change them every 40 days or so, but only every 2 years, to reward employees for really learning them and NOT writing them down. We need the passwords to, e.g., log into user accounts and modify settings that are too complicated for group policies, or to help users. These might certainly be controversial policies, but I want to continue them on AD. For now I save new accounts and their pwgen-generated passwords (pwgen creates nice-sounding random words with sensible amounts of vowels, consonants and numbers) manually into the old text file that the old scripts used to maintain automatically. How can I get this functionality back in AD? I see that there is "reversible encryption" on AD accounts, probably for challenge-response authentication systems that need the cleartext password stored on the server. Is there a script that displays all these passwords? That would be great. (Again: I trust my DC not to be compromised.) Or can I have a plugin for AD Users & Computers that gets a notification of every new password and stores it in a file? On clients that is possible with GINA DLLs; they can be notified about passwords and receive the cleartext.


  • Installed MySQL using yum but mysqld doesn't start: Fedora 16

    - by Sumit Singh Bir
    I installed mysql, mysql-server and mysql-libs. After installation I tried to start the MySQL service with:

        systemctl start mysqld.service

    which failed with:

        Failed to issue method call: Unit mysqld.service failed to load: Invalid argument.
        See system logs and 'systemctl status mysqld.service' for details.

    systemctl status mysqld.service also said it had an invalid argument. I then replaced the contents of /etc/systemd/system/mysqld.service with this:

        [Unit]
        Description=MySQL database server
        After=syslog.target
        After=network.target

        [Service]
        User=mysql
        Group=mysql
        ExecStart=/usr/sbin/mysqld --pid-file=/var/run/mysqld/mysqld.pid
        ExecStop=/bin/kill -15 $MAINPID
        PIDFile=/var/run/mysqld/mysqld.pid
        # We rely on systemd, not mysqld_safe, to restart mysqld if it dies
        Restart=always
        # Place temp files in a secure directory, not /tmp
        PrivateTmp=true

        [Install]
        WantedBy=multi-user.target

    With that, the "invalid argument" error was resolved and mysqld.service was loaded, though not enabled; systemctl start mysqld.service now worked fine. It worked! But then enabling it with systemctl enable mysqld.service (or running service mysqld start) did nothing; the cursor just kept blinking after I pressed Enter. The thing is, I set aside HDD space on this F16 machine specifically for development work, and I can't figure out a way forward. Please, somebody help.
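
    A hedged guess at the missing step: systemd has to re-read unit files after they are edited before enable behaves predictably. Something like:

        systemctl daemon-reload            # pick up the edited unit file
        systemctl enable mysqld.service    # creates the multi-user.target.wants symlink
        systemctl status mysqld.service    # confirm it is loaded and enabled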


  • How to interpret iozone values

    - by Henno
    I ran a test to measure my I/O IOPS on Linux:

        iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b /tmp/results.xls

    iozone claims that the output is in operations per second, yet the numbers are too big for that to be plausible: I'm observing a maximum of some 320 commands/s on the VMware ESX console (esxtop, then v).

        File size set to 4194304 KB
        Record Size 2 KB
        Record Size 4 KB
        Record Size 8 KB
        Record Size 16 KB
        Record Size 32 KB
        OPS Mode. Output is in operations per second.
        Command line used: iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b tmpresults.xls
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                             random  random    bkwd   record  stride
             KB  reclen  write  rewrite   read  reread        read   write    read  rewrite    read  fwrite frewrite  fread freread
        4194304       2  19025     5580  27581   29848         284     198     415  1103217    1498   18541     4340  24245   25618
        4194304       4  15650    21942  18962   21068         252    1198     193   976164    1677   22802    23093  21089   21232
        4194304       8  11121    11638  10273   10165         247    1196     202   625020^C

    The test had run for 15 hours before I pressed ^C. Is that an ordinary expectation for such a command line (a dedicated 4-drive RAID10 LUN, 10k RPM SAS drives in an EMC CX300)?


  • grep command is not matching the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have one file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in a variable (INPUT_FILE_T), and I want to search for it in the file (PCF_STARHUB_20130625_1). For that I used:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command comes out as:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    I have two problems with this output. First, only one entry is shown (it should contain two); second, the output contains "PCF_STARHUB_20130625_1:", which should not appear. The output should look like:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique other than grep, please let me know. Please help me with this issue.
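
    A hedged observation: left unquoted, the shell expands SH?*???????????????US.* as a filename glob before grep ever sees it, which both rewrites the pattern and can add extra file arguments (and grep prefixes matches with the filename whenever it is given more than one file). A quoted, regex-style version of the same search might look like:

        INPUT_FILE_T='SH_.*_US\..*'      # a regex, not a glob; the exact pattern is illustrative
        grep "${INPUT_FILE_T}" PCF_STARHUB_20130625_1
        # add -h to suppress filename prefixes when searching several files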


  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine, and its Apache gets started on Windows startup (for some reason I don't recall now, and it can't be changed). I want to prepare my portable server for situations like this, so killing httpd.exe in the process list before starting my portable server is not an option. Because of the already active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81. This is a problem, as the WP site is very dependent on its URL and I don't want to store the URL with the port in the WP database. Here is what I want to do through .htaccess: on any path except the error.php file, check whether the port is 80; if it is not, redirect to /error.php?code=port. Is it possible for this to take priority over WP's redirection and URL handling? In error.php I provide info on how to manually close httpd.exe and so on, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help! I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. I tried something like the following, but it doesn't work:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </If>
        <Else>
            RewriteEngine On
            RewriteRule ^(error.php)($|/) - [L]
            RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress

    By the way, the portable server (Server2Go) automatically generates vhosts based on the hostname set in its config file, and changes ports if the port (e.g. 80) is already taken.
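
    One possible reason the block above fails: the <If>/<Else> directives only exist in Apache 2.4, and portable WAMP-style stacks of that era often ship 2.2. A variant using only mod_rewrite conditions (a sketch, untested against Server2Go's generated vhosts) would be:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php
        RewriteRule ^ /error.php?code=port [L]
        # placed above the standard WordPress rules, this fires first on any non-80 port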


  • My client's Windows SBS 2011 VM on an Ubuntu host with VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.

    Host:

        Ubuntu Desktop Edition 10.04 (I know, again, not my choice), running VMware Zimbra
        8 GB of RAM
        On-board RAID1 of two 320 GB Seagate Barracuda drives for the OS
        Software RAID5 of four 500 GB WD Caviar Black drives on mdadm for bulk storage (sorry, I don't know the model number)
        A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (I'm not suspicious of this as the bottleneck)

    Guest:

        Windows Small Business Server 2011
        4 GB of RAM
        Host-equivalent CPU allocation
        VDI file for the OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID

    For some reason the VM locks up while sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*


  • Triple monitor setup in Linux with a USB-HDMI adapter

    - by Oscar Carballal
    I'm trying to set up a triple-monitor desktop at my office using Fedora 17, but it seems impossible. Let me explain the setup:

        ASUS K53SD laptop with 2 graphics cards, Intel and nVidia (laptop screen controlled by the Intel card)
        24" Full HD monitor connected to the HDMI output (controlled by the Intel card)
        23" Full HD monitor connected to a USB-HDMI adapter (via framebuffer in /dev/fb2, apparently)
        VGA output (not used) controlled by the nVidia card

    First of all, the USB-HDMI adapter works perfectly: it gives me a green screen (which means the communication is OK), and I can make it work if I set up a single-monitor configuration via framebuffer in Xorg. Here is the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux

    Now I'm trying to set up the two main monitors (laptop and 24") with the intel driver and the 23" with the framebuffer, but the best result I get is the two main monitors working and the third disconnected. Do you have any idea what I can do to make this work? Here are my xrandr output and my Xorg conf:

        -> xrandr
        Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
        LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
           1366x768       60.0*+
           1024x768       60.0
           800x600        60.3     56.2
           640x480        59.9
        VGA2 disconnected (normal left inverted right x axis y axis)
        HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm
           1920x1080      60.0*+   50.0     25.0     30.0
           1680x1050      59.9
           1680x945       60.0
           1400x1050      74.9     59.9
           1600x900       60.0
           1280x1024      75.0     60.0
           1440x900       75.0     59.9
           1280x960       60.0
           1366x768       60.0
           1360x768       60.0
           1280x800       74.9     59.9
           1152x864       75.0
           1280x768       74.9     60.0
           1280x720       50.0     60.0
           1440x576       25.0
           1024x768       75.1     70.1     60.0
           1440x480       30.0
           1024x576       60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           720x576        50.0
           848x480        60.0
           720x480        59.9
           640x480        72.8     75.0     66.7     60.0     59.9
           720x400        70.1
        DP1 disconnected (normal left inverted right x axis y axis)
           1920x1080_60.00   60.0

    The Xorg file:

        # Xorg configuration file for using a tri-head display
        Section "ServerLayout"
            Identifier   "Layout0"
            Screen 0     "HDMI" 0 0
            Screen 1     "USB" RightOf "HDMI"
            Option       "Xinerama" "on"
        EndSection

        ########### MONITORS ################
        Section "Monitor"
            Identifier   "USB1"
            VendorName   "Unknown"
            ModelName    "Acer 24as"
            Option       "DPMS"
        EndSection

        Section "Monitor"
            Identifier   "HDMI1"
            VendorName   "Unknown"
            ModelName    "Acer 23SH"
            Option       "DPMS"
        EndSection

        ########### DEVICES ##################
        Section "Device"
            Identifier   "Device 0"
            Driver       "intel"
            BoardName    "GeForce"
            BusID        "PCI:0:02:0"
            Screen       0
        EndSection

        Section "Device"
            Identifier   "USB Device 0"
            Driver       "fbdev"
            Option       "fbdev" "/dev/fb2"
            Option       "ShadowFB" "off"
        EndSection

        ############## SCREENS ######################
        Section "Screen"
            Identifier   "HDMI"
            Device       "Device 0"
            Monitor      "HDMI1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier   "USB"
            Device       "USB Device 0"
            Monitor      "USB1"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection


  • Why does java -version return a different version than the one defined in JAVA_HOME?

    - by Shekhar
    I am trying to set JAVA_HOME on Ubuntu. I have copied JDK 1.7 into /usr/lib/jvm and set JAVA_HOME in the /etc/profile file. The contents of the /usr/lib/jvm folder are as follows:

        shekhar@ubuntu:~$ ls /usr/lib/jvm/
        default-java        java-1.6.0-openjdk       java-6-openjdk         java-6-openjdk-i386  jdk1.7.0_01
        java-1.5.0-gcj-4.6  java-1.6.0-openjdk-i386  java-6-openjdk-common  java-7-openjdk-i386

    and the last few lines of /etc/profile are:

        export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_01
        export PATH=$PATH:$JAVA_HOME/bin

    After finishing all this, when I run the java -version command I get the following output:

        shekhar@ubuntu:~$ java -version
        java version "1.6.0_24"
        OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
        OpenJDK Server VM (build 20.0-b12, mixed mode)

    and when I run ls -lah I get the following output:

        shekhar@ubuntu:~$ ls -lah /usr/bin/java
        lrwxrwxrwx 1 root root 22 Sep 29 09:58 /usr/bin/java -> /etc/alternatives/java
        shekhar@ubuntu:~$ ls -lah /etc/alternatives/java
        lrwxrwxrwx 1 root root 45 Sep 29 09:58 /etc/alternatives/java -> /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java

    Can anyone please tell me what I am missing? Why does Ubuntu still point to OpenJDK and not to my JDK 7? PS: I have seen this similar question and its answers, but that question relates to Windows, not Ubuntu, so I am reposting it for Ubuntu.
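
    For what it's worth, the java found on PATH here is resolved through the alternatives symlink chain shown above, not through JAVA_HOME; pointing the alternative at the new JDK would look something like this (the priority value 100 is arbitrary):

        sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_01/bin/java 100
        sudo update-alternatives --config java     # then select the jdk1.7.0_01 entry
        java -version                              # should now report 1.7.0_01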


  • SATA hard drive failed

    - by M. Shehryar
    Dear friends, my dual-core PC initially started shutting down and restarting by itself after I added 1 GB of RAM and upgraded the graphics card. Then it refused to boot. I restored the Windows ghost image, but it still failed to boot. I tried to install a fresh copy of Windows, but the installation failed after copying the Windows files. I then tried to install an old Vista Longhorn build; it inspected the disk, found errors and fixed them, but ultimately failed to install. Once again I restored the ghost image through Acronis, but it failed to boot. In the end I attached the drive as a slave to another PC, but it was not visible; even Acronis could not see it or its partitions. Only the BIOS can see it. It seems that no file system is available on the drive. The data on the drive is very important. Please help me recover my data. The drive is a Samsung, 160 GB capacity, and the file system was NTFS.


  • OS X borked; need to back up outside of Time Machine

    - by rlbgator
    Quick background: iMac G5 (the white one; 4 years old?) running Leopard 10.5.something. Time Machine started failing on me, and every time I touch the Finder, things beachball like crazy. Booting from the install disk and then using Disk Utility to "Repair Disk" also fails. I'm left with the conclusion that I have a corrupt file somewhere important that is (i) keeping TM from working and (ii) messing with basic functionality. I am not (yet) savvy enough in OS X to know which logs to look in, or how to decipher them, but a corrupt file seems to be the likely case based on my reading of apple.com forum threads. So I think I need to back up outside of Time Machine, then install a fresh OS X on a new drive (or maybe SpinRite the current drive?). I am able to attach a (non-Time Machine) external USB drive, so I dragged all 3 users' folders onto it... am I done backing up? Am I going to have a massive permissions problem trying to put things back together after a reinstall? Thanks for reading.
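
    A hedged alternative to Finder drags (which tend to stop dead at the first unreadable file): the rsync bundled with Leopard can copy home folders while preserving OS X metadata; the volume name below is illustrative:

        # copy all home folders to the external drive, keeping extended attributes (-E)
        sudo rsync -avE /Users/ /Volumes/BackupUSB/Users/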


  • conflicting info about the running kernel version in FreeBSD

    - by John
    I asked a related question about uname before; now I want to ask from another angle, because the following simple yet obviously conflicting outputs may mean there is something many people (me included) did not think of. I'm running FreeBSD 9 RELEASE. Please see the following commands:

        # sysctl kern.bootfile
        kern.bootfile: /boot/kernel/kernel
        # strings /boot/kernel/kernel | grep RELEASE | grep 9
        @(#)FreeBSD 9.2-RELEASE-p7 #0: Tue Jun  3 11:05:13 UTC 2014
        FreeBSD 9.2-RELEASE-p7 #0: Tue Jun  3 11:05:13 UTC 2014
        9.2-RELEASE-p7

    The kernel file above suggests the running kernel is 9.2-RELEASE-p7. But...

        # dmesg
        Copyright (c) 1992-2012 The FreeBSD Project.
        Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
                The Regents of the University of California. All rights reserved.
        FreeBSD is a registered trademark of The FreeBSD Foundation.
        FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012
        ...
        # uname -a
        FreeBSD localhost.localdomain 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec  4 09:23:10 UTC 2012
        [email protected]:/usr/obj/usr/src/sys/GENERIC amd64

    So dmesg and uname say it's 9.1-RELEASE. I also did an extensive find / -type f -exec grep -l "9.1-RELEASE" {} \; but found no file that could be a 9.1-RELEASE kernel. What could lead to the above conflict, and which kernel am I actually running? Please note that I run RELEASE and used freebsd-update for binary updates, so no compiled kernel is involved. I have rebooted multiple times after freebsd-update, and the system is not in a jail or anything like that; it is the only system on that computer.


  • How to create a Windows Vista/Windows 7 startup disk, rescue disk, or system restore points on a CD?

    - by goldenmean
    Hello, I have two laptops, one with Windows Vista Home Premium and the other with Windows 7 Professional. Both are OEM installs (pre-installed when I bought the laptops) and I do not have the Windows installation disks for them. Usually the installation disks provide a repair option in case one needs to repair or rescue an improper Windows installation, but since I don't have them, I want to create rescue/startup disks. My questions are: 1] How do I create a system rescue disk/startup disk on a CD for these two versions of Windows? 2] Can't the system restore points which Vista/Windows 7 create be written to a CD instead of the hard disk? 3] If I have a manual backup of my Windows registry, in which I have exported the whole registry to a file which I keep on a CD, how do I restore that registry back onto a Windows installation which might not be booting up properly due to a bad registry? EDIT: 4] Is there any way to use these system restore points directly during boot-up of the laptop, if Windows cannot boot properly due to a problem? The first laptop is an HP Pavillion dv6646 and the second one is a Sony VAIO VCPEE series. Thank you. -AD


  • Making one of the folders the default in Apache

    - by OmerO
    Hello, the file & directory structure of my website is as follows:

        /Library/WebServer/mysite/joomla ..
        /Library/WebServer/mysite/wiki ..
        /Library/WebServer/mysite/forum ..
        /Library/WebServer/mysite/index.php

    As you see, there are various applications, each residing in a separate folder. To define this structure, I have made this entry in Apache's http-vhosts.config file:

        ServerName mysite.com
        DocumentRoot "/Library/WebServer/mysite"

    And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific behaviour: when someone visits mysite, he/she should automatically be directed to /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php). I don't want to achieve that by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm, because that causes time delays (because of the redirection, of course). In this case, the only proper way of achieving it seems to be to set DocumentRoot this way:

        DocumentRoot "/Library/WebServer/mysite/joomla"

    But when I set it that way, the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around that, I put in directives like:

        Alias /wiki /Library/WebServer/mysite/wiki
        Alias /forum /Library/WebServer/mysite/forum

    and it did work the way I wanted. But... I still cannot use it that way, because I then couldn't manage to make the wiki use short URLs (as described in link text). So I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :). Can I do this in Apache? Is there any other way you might suggest? Thanks.
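
    One hedged way to get an internal (no extra round-trip) mapping of the site root onto the Joomla folder, while DocumentRoot stays at /Library/WebServer/mysite, is a pass-through rewrite in the vhost; a sketch, untested against the wiki's short-URL rules:

        RewriteEngine On
        RewriteRule ^/$ /joomla/index.php [PT,L]
        # [PT] hands the rewritten path back to Apache's URL mapper, so /wiki and
        # /forum still resolve normally and no external redirect reaches the client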


  • IIS and ASP.NET

    - by sam
    I'm trying to add the ASP.NET feature on Windows 7. I tried to turn it on using "Turn Windows features on or off", but it fails every time. So I downloaded the Web Platform Installer and tried it that way, and that fails also. Next I uninstalled .NET Framework 4, restarted (again!), reinstalled it and retried the previous steps, but it fails the same way. I need this installed so I can view my site in IIS 7. Does anyone know what I can do to get this working? I've searched and searched, and everything fails. I get this error in the Web Platform Installer: Failed with 0x80070643 – Fatal Error during installation. Please help; I can't do my work without it. OK, I did a few things and now get this error:

        Server Error in '/pulse' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request.
        Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Could not load type 'pulsesite.MvcApplication'.
        Source Error:
            Line 1: <%@ Application Codebehind="Global.asax.vb" Inherits="pulsesite.MvcApplication" Language="VB" %>
        Source File: /pulse/global.asax    Line: 1
        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    I know it's just about changing the code, but I'm not good with C#. Does anyone know how?
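
    On the registration half of the problem, a commonly suggested sketch (run from an elevated command prompt; the path assumes the 32-bit Framework):

        :: re-register ASP.NET 4 with IIS after reinstalling the framework
        cd %windir%\Microsoft.NET\Framework\v4.0.30319
        aspnet_regiis.exe -i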


  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin, securely, via my server's IP address. Will the configuration below work? I have only one shot to get this working in production. Are there any redundant settings?

        --- apache ssl.conf file ---
        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

        --- apache httpd.conf file ---
        ...
        DocumentRoot "/var/www/html"   # currently exists
        ...
        NameVirtualHost *:443          # new - is this really needed if "Listen 443" is in ssl.conf???
        ...
        # below vhost currently exists (the domain I wish to enable SSL on)
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        # below vhost currently exists
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        # new - I plan on adding this vhost block to enable SSL for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "/var/www/html/hiddenfolder".
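
    One hedged note on the phpMyAdmin requirement: as written, the *:443 vhost serves domain1's DocumentRoot, so /hiddenfolder under /var/www/html would not be reachable over HTTPS. An explicit alias inside the SSL vhost (path taken from the question) is one way to cover it:

        Alias /hiddenfolder /var/www/html/hiddenfolder
        # lets https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin resolve even though
        # this vhost's DocumentRoot points at domain1's tree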


  • Installing mysqlnd for PHP 5.4.9 on CentOS 6.3

    - by kira423
    Okay, let me get straight to the point: I am a complete noob and have never done stuff like this before. I have read tutorial after tutorial, but I can't get anything to work. When I tried to install the RPM file I got this error:

        rpm -Uvh ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/enterprise/6/test/x86_64/php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        Retrieving ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/enterprise/6/test/x86_64/php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        warning: /var/tmp/rpm-tmp.ez4vvd: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY
        error: Failed dependencies:
                php-pdo(x86-64) = 5.4.9-1.el6.remi is needed by php-mysqlnd-5.4.9-1.el6.remi.x86_64

    So I tried installing that RPM file and got this error:

        rpm -ivh ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm
        Retrieving ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm
        curl: (9) Server denied you to change to the given directory
        error: skipping ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm - transfer failed

    I used the FTP links because I have no idea how else to get the files onto the server. I think I am getting overly frustrated with this, but I have to get this driver installed for any of my scripts to function correctly. Any help would be greatly appreciated!
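
    A hedged alternative to fetching individual RPMs over FTP (which means resolving the dependency chain by hand): enable the Remi repository and let yum pull in php-pdo and friends. The URLs below are the usual EL6 ones from that period and may have moved since:

        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
        rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
        yum --enablerepo=remi,remi-test install php-mysqlnd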


  • On AWS EC2, unable to run sudo command after modifying permissions on the /usr folder

    - by Kayote
    All, we have searched quite a bit, and a few of Eliah Kagan's posts are great about getting access back to sudo. However, our server is on AWS EC2 and I am a complete newbie to this. We are trying to set up cron jobs for backing up our server data. What we did: using PuTTY, we created a script file, /usr/share/site-db-backup/backupToS3.php; however, Ubuntu was not saving the changes we made, reporting that we did not have permission as user 'ubuntu'. The error details are:

        Upload of file backupToS3.php was successful but error occurred while setting the permission and/or timestamp.
        If the problem persists, turn on 'ignore permission errors' option.
        Permission denied.
        Error code: 3
        Request code: 9

    So we ran the command sudo chmod -R a+rwx /usr to grant permission on the /usr folder. However, now whatever sudo command we run, we get this error:

        /usr/lib/sudo/sudoers.so must be only be writable by owner. fatal error, unable to load plugins.

    We are complete newbies to Ubuntu and EC2, so we do need step-by-step guidance on how to get sudo back and successfully write to the cron script sitting in the /usr folder.
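
    Since a broken sudo can't repair itself and EC2 offers no local console, the usual hedged recipe is to fix the filesystem from a second instance; device names and mount points below are illustrative:

        # 1. stop the broken instance, detach its root EBS volume, and attach it
        #    to a helper instance (where it appears as, say, /dev/xvdf1)
        sudo mount /dev/xvdf1 /mnt/rescue
        sudo chmod -R go-w /mnt/rescue/usr        # undo the blanket a+rwx
        sudo chmod 4755 /mnt/rescue/usr/bin/sudo  # sudo must stay setuid root
        sudo umount /mnt/rescue
        # 2. reattach the volume as the original instance's root device and boot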


  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    For days now I've been trying to update my video driver. I have an Acer Aspire 5745PG with an "NVIDIA GeForce 310M" board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. It happened that some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I've already lost a good three nights trying to sort this out, without success :( The display driver installed before (and that I currently have, after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, which was shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466 Running it would say "Diver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried NVIDIA's own driver, the latest of which was version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe That gave me a similar error message :/ After some research I found that other people had the same issue and had to change the setup's configuration file to make the installer recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904 That topic said to look for the "Device Instance Id" property of the "NVIDIA GeForce 310M" display adapter, which I couldn't find; instead I found the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the .inf file, first for the Acer installer and then for NVIDIA's own driver. Both actually went ahead with the installation, but the only thing I got was a black screen, while the computer still appeared to be running fine. I had to hard-reset, and it came back up with the generic VGA driver. Safe mode aside, I could only get my display back using the recovery function. I imagine thousands of these notebooks were sold; can it really be that the driver can't be updated? Could someone help me with this? Thanks, Echo

