Search Results


  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:

        /usr/bin/mysql -u username -ppassword -h localhost database

    It works perfectly fine when executed manually, but not from within a script. When I try to execute a script that contains that command, I get the following error:

        ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)

    I literally copied and pasted the working command into the script. Why the error? As a side note: the ultimate intent is to run the script with cron.

    EDIT: Here is a stripped-down version of the script I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19.

        #!/bin/sh
        #Run download script to download product data
        cd /home/dir/Scripts/Linux
        /bin/sh script1.sh
        #Run import script to import product data to MySQL
        cd /home/dir/Mysql
        /bin/sh script2.sh
        #Download inventory stats spreadsheet and rename it
        cd /home/dir
        /usr/bin/wget http://www.url.com/file1.txt
        mv file1.txt sheet1.csv
        #Remove existing export spreadsheet
        rm /tmp/sheet2.csv
        #Run MySQL queries in "here document" format
        /usr/bin/mysql -u username -ppassword -h localhost database << EOF
        --Drop old inventory stats table
        truncate table table_name1;
        --Load new inventory stats into table
        Load data local infile '/home/dir/sheet1.csv' into table table_name1 fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        --MySQL queries to combine product data and inventory stats here
        --Export combined data in spreadsheet format
        group by p.value into outfile '/tmp/sheet2.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
        EOF

    EDIT 2: After some more testing, the issue is with the << EOF at the end of the command, which is there for the "here document". When it is removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
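    One thing worth trying, sketched below with illustrative file names (the option file and its location are an assumption, not something from the post): moving the credentials into a MySQL option file takes the -p parsing out of the shell's hands and leaves the here-document as the only variable. Note also that MySQL only treats -- as a comment opener when it is followed by whitespace, so lines like --Drop would be syntax errors once the connection succeeds.

        # ~/.my.cnf -- make it readable only by the cron user (chmod 600):
        #   [client]
        #   user=username
        #   password=password

        # --defaults-extra-file must be the first option on the command line
        /usr/bin/mysql --defaults-extra-file="$HOME/.my.cnf" -h localhost database << 'EOF'
        -- quoting the delimiter ('EOF') stops the shell expanding anything inside
        truncate table table_name1;
        EOF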

  • Is there a limit setting a php_admin_value in php-fpm?

    - by PeeHaa
    I am trying to set a large value in the configuration of a pool in php-fpm, but at some point it just doesn't start anymore.

        php_admin_value[disable_functions] = dl,exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,pcntl_exec,include,include_once,require,require_once,posix_mkfifo,posix_getlogin,posix_ttyname,getenv,get_current_user,proc_get_status,get_cfg_var,disk_free_space,disk_total_space,diskfreespace,getcwd,getlastmod,getmygid,getmyinode,getmypid,getmyuid,ini_set,mail,proc_nice,proc_terminate,proc_close,pfsockopen,fsockopen,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,fopen,tmpfile,bzopen,gzopen,chgrp,chmod,chown,copy,file_put_contents,lchgrp,lchown,link,mkdir,move_uploaded_file,rename,rmdir,symlink,tempnam,touch,unlink,iptcembed,ftp_get,ftp_nb_get,file_exists,file_get_contents,file,fileatime,filectime,filegroup,fileinode,filemtime,fileowner,fileperms,filesize,filetype,glob,is_dir,is_executable,is_file,is_link,is_readable,is_uploaded_file,is_writable,is_writeable,linkinfo,lstat,parse_ini_file,pathinfo,readfile,readlink,realpath,stat,gzfile,create_function

    When trying to restart php-fpm, it fails with the following message:

        Stopping php-fpm: [ OK ]
        Starting php-fpm: [20-Oct-2013 22:31:52] ERROR: [/etc/php-fpm.d/codepad.conf:235] value is NULL for a ZEND_INI_PARSER_ENTRY
        [20-Oct-2013 22:31:52] ERROR: Unable to include /etc/php-fpm.d/codepad.conf from /etc/php-fpm.conf at line 235
        [20-Oct-2013 22:31:52] ERROR: failed to load configuration file '/etc/php-fpm.conf'
        [20-Oct-2013 22:31:52] ERROR: FPM initialization failed [FAILED]

    When I remove the last disabled function (create_function), it starts again. I also tried removing other functions instead, which gives the same error, so it's not related to create_function itself. The string is currently just over 1 KB in size, so it looks like I have hit a limit here. Is my assumption correct? Is there a way to overcome this limit? I also tried adding another php_admin_value[disable_functions] line underneath it (hoping it would be appended), but that didn't work; it just used the first one.
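    If the limit does sit in php-fpm's own ini parser rather than in the engine, one workaround to experiment with is keeping the long list out of the pool file entirely. This is a sketch under that assumption (the path and file name are illustrative), with a real trade-off to note: an ini file in the scan directory applies to every pool, unlike a per-pool php_admin_value.

        # put the full list in an ini file picked up by PHP's scan directory
        cat > /etc/php.d/zz-disable-functions.ini << 'EOF'
        ; disable_functions is PHP_INI_SYSTEM, so scripts cannot re-enable these
        disable_functions = dl,exec,passthru,shell_exec,system ; ...full list here
        EOF
        service php-fpm restart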

  • Can't install xclip on Ubuntu 10.10

    - by wildster
    I'm trying to load an SSH key to GitHub from a new machine, and this command is not working:

        sudo apt-get install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package xclip is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package xclip has no installation candidate

    When I try:

        sudo aptitude install xclip
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        No candidate version found for xclip
        No candidate version found for xclip
        The following partially installed packages will be configured:
          synaptics-dkms
        0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B of archives. After unpacking 0B will be used.
        Writing extended state information... Done
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
          subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
          synaptics-dkms
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up synaptics-dkms (1.1.1) ...
        Loading new synaptics-1.1.1 DKMS files...
        Error! Cannot locate /usr/src/synaptics-1.1.1.dkms.tar.gz.
        File does not exist.
        dpkg: error processing synaptics-dkms (--configure):
          subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
          synaptics-dkms
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done

    Any idea how I can install this? Mucho thanks in advance.
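    For what it's worth, "no installation candidate" usually means no enabled repository carries the package, and xclip lives in the universe component. A sketch of one way through, assuming that is the cause (and clearing the unrelated half-configured synaptics-dkms package first so dpkg stops failing):

        # remove the broken package so apt can finish its runs
        sudo apt-get remove --purge synaptics-dkms
        # enable universe for Ubuntu 10.10 (maverick) and refresh the index
        sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu maverick universe"
        sudo apt-get update
        sudo apt-get install xclip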

  • Using URL rewrite module for http to https redirect

    - by johnnyb10
    Following ruslany's suggestion on the URL Rewrite Tips page here, I'm trying to use URL Rewrite to redirect http:// requests for my site to https://. I've written and tested the rule using a test site I set up, so now the final piece is to create a second (http) site that redirects to my https site. (I need a second site because I don't want to uncheck the "Require SSL encryption" checkbox on my existing site.)

    I'm an IIS newbie, so my question is: how do I do this? Should I create a site with the same name and host header, only bound to http? Will IIS let me create a site with the same name? I don't want to screw anything up with my existing site (which is a SharePoint site, currently used by external users). That site currently has both http and https bound to it.

    So my assumption is that, using IIS (not SharePoint), I will create a new site (http only) with the same name and host header as my existing site, and add the URL Rewrite rule to the http site. And then I guess I should remove the http binding from my existing site? Does that seem correct? Any advice, gotchas, etc., would be appreciated. Thanks.
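    Site names in IIS do have to be unique, so the redirect site needs its own name; only the host header and port have to match what users type. A sketch with appcmd (which lives in %windir%\system32\inetsrv), where every name and path below is illustrative rather than taken from the question:

        REM create the http-only redirect site (unique name, same host header)
        appcmd add site /name:"Portal HTTP Redirect" /bindings:http/*:80:portal.example.com /physicalPath:C:\inetpub\redirect

        REM once the rewrite rule is in place and tested, drop the old http binding
        appcmd set site "SharePoint Portal" /-bindings.[protocol='http',bindingInformation='*:80:portal.example.com']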

  • Calendar booking issue - Exchange 2003 and 2010

    - by NaOH
    In our organization we are running Exchange 2003 and 2010 simultaneously, with the hope of migrating everyone to Exchange 2010 sometime within the next few months. Everyone is using Outlook 2010.

    Recently, we had an issue with transaction log storage on the Exchange 2003 server. This was resolved, but for some reason no meeting room on the Exchange 2003 server will automatically book meetings any longer. I have played around with this for a while, changing calendar permissions, turning resource scheduling off and back on, etc. No dice.

    My next step was to try migrating a resource to the Exchange 2010 server. After doing so, setting it up as a Room, enabling Auto-Accept, and removing the EnableDirectBooking registry entry on my PC, I can book a meeting with this room. If EnableDirectBooking is enabled, I get an error message stating:

        "Meeting Room" declined your meeting because it is recurring. You must book each meeting separately with this resource.

    This is despite the fact that the meeting I'm attempting to create has no recurrence. Now, I have also created a new test Room from scratch on the Exchange 2010 server, and I can book a meeting with that Room regardless of whether or not I have the EnableDirectBooking reg entry in place.

    All users here have this registry entry, and I'd rather not have to figure out how to push something out to remove it from every PC. Rather, I'd like to figure out what's different between the configurations of these two meeting rooms, so that I could book a meeting room regardless of whether EnableDirectBooking is enabled or not. Any ideas, anyone? Thanks!
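    One way to see what actually differs between the migrated room and the freshly created one is to diff their booking policies from the Exchange 2010 Management Shell. The sketch below assumes both room mailboxes now live on 2010, and the identity names are placeholders:

        # dump the booking-relevant settings for each room and compare
        Get-CalendarProcessing -Identity "Migrated Room" | Format-List AutomateProcessing,AllowRecurringMeetings,*Booking*
        Get-CalendarProcessing -Identity "Test Room" | Format-List AutomateProcessing,AllowRecurringMeetings,*Booking*

        # if the migrated room came across with stricter settings, align it:
        Set-CalendarProcessing -Identity "Migrated Room" -AutomateProcessing AutoAccept -AllowRecurringMeetings $true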

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates, etc., I want these machines to be joined to a dedicated domain for this estate.

    Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this:

    1. Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.
    2. Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.

    Option one is more costly to set up and has a higher operational cost. It also offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can reach the remote machine.

    In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to a domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues (see the sketch below).

    I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to know other opinions on this.
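    For the timed redial, something like the following works as a sketch (the task name, connection name and credentials are all illustrative; rasdial is the built-in RAS dialer and is effectively a no-op when the connection is already up):

        REM re-dial the estate VPN hourly if it has dropped
        schtasks /create /tn "Redial Estate VPN" /sc hourly /ru SYSTEM /tr "rasdial \"Estate VPN\" vpnuser vpnpassword"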

  • BIOS not detecting working SATA hard drive.

    - by Evan
    Some time ago my power supply died. It's a long story from then till now, but the important bit is that I ended up with a new hard drive and a new power supply. I tested to see if my original hard drive was still alive, and it booted and worked perfectly until I turned it off. When I started it again, it would not boot.

    I bought new SATA cables, assuming that the one I had was not seating properly (it was cheap and wobbly), but no dice. Upon start-up I am presented with a message telling me to insert boot media into the selected drive, or add a drive and restart. Neither the new nor the old drive is detected by the BIOS, my Vista install disk, or my bootable Linux USB drive.

    When I remove all of the RAM, the computer ceases outputting visual information, and upon reinstalling the RAM and starting up again it gives me a "failed overclock" error. So, does anyone have an idea as to what might be going on? I'm completely lost at this point.

  • Virus on site but can't find where

    - by Rob
    WARNING! THIS IS ABOUT A VIRUS ON MY SITE. IT APPEARS IT HAS BEEN THERE FOR SOME TIME AND I'VE HAD NO PROBLEMS, BUT PLEASE BE CAREFUL. READ EVERYTHING I SAY AND SEE IF YOU CAN HELP ME WITHOUT VISITING THE LINK. AVG PICKS UP ON IT AND BLOCKS IT; MCAFEE DOES NOT.

    Sorry about the warning; obviously I'm not here to get anyone infected or anything like that. Basically, I run the website sortitoutsi dot net. Ages ago I got a virus on my computer, the attackers got hold of my FTP passwords, and some lines of JavaScript were added to the top of my site. I removed them and believed the problem was fixed.

    However, I'm using the "Web Developer" extension for Firefox, and when I chose to view all JavaScript on my page I found various links to horrible URLs such as:

        gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/aboutus.org/aboutus.org/google.com/skycn.com/torrents.ru.php

    and

        gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/index.php?jl=

    These terms do not appear anywhere: not in the source code, not in any of the JavaScript or the CSS. I also can't see any rogue images that I don't recognise. So I've no idea where this JavaScript is coming from. Can anyone suggest how I can find references to these links and remove them? I can see them both in the Web Developer Firefox extension and in the Net tab using Firebug. Any help would be greatly appreciated.
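    If you have shell access to the server, a brute-force text search over everything the site serves is a reasonable first sweep. A sketch (the web-root path is illustrative) that also checks the spots such injections commonly hide in:

        # search served files for the injected hostname and common obfuscation wrappers
        grep -rn "eastmusicdirect" /var/www/site
        grep -rn "eval(base64_decode" /var/www/site

        # .htaccess rewrites and conditional redirects are easy to miss
        find /var/www/site -name ".htaccess" -exec grep -Hn "RewriteRule" {} +

    If nothing turns up in the files, the markup may be coming out of the database (for CMS-driven pages) or being prepended server-wide via a PHP auto_prepend_file directive, so those are worth checking next.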

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there,

    I recently upgraded to Exchange 2010 and have a setup with two of my servers running CAS roles: EXCH01 and EXCH02. EXCH02 also happens to hold a mailbox role where a lot of the users sit. EXCH01 is my front-facing CAS server, facing the net with SSL etc., and incoming mail moves through it as a hub transport server as well.

    As I was trying to slim things down in my VM environment, I removed the CAS role from EXCH02, and all hell broke loose. All the mail users with a mailbox on EXCH02 had their homeMTA set to a deleted-items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually fixed these back to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts without Outlook working, so I just put things back the way they were in a hurry). I should add, as a strange side note, that before I reset these to point at EXCH02 I tried EXCH01, and that failed.

    I still want to remove the CAS role from EXCH02, as it should really not have it (an error on my part during install/planning), and I would have thought this would not cause the issues it did; I assumed that because there was another CAS server in the admin group, all would be good. Was I wrong in my assumption? What can I do to complete this successfully the second time round? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?
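    One thing worth checking before the second attempt, sketched here for the Exchange 2010 Management Shell (the database names are placeholders): each mailbox database records which CAS its clients connect through, and a database still pointing at EXCH02 would explain Outlook breaking the moment the role disappears.

        # see which CAS each mailbox database directs RPC clients to
        Get-MailboxDatabase | Format-List Name,RpcClientAccessServer

        # repoint any database still homed on EXCH02 before removing the role
        Set-MailboxDatabase "Mailbox Database 01" -RpcClientAccessServer "EXCH01.contoso.local"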

  • Best usage for a laptop being used as a desktop without removable batteries

    - by Senseful
    After reading the information on http://batteryuniversity.com, I realize that one of the best ways to permanently damage a lithium-ion battery is to use the battery at a high temperature while it's fully charged. This is exactly what happens when you use a laptop as if it were a desktop computer, since leaving it plugged in will keep the battery at 100% and using the computer will heat up the battery. This is why it's recommended to remove the battery from your laptop if you are using it in this scenario.

    My question is: what would you do if the laptop doesn't have removable batteries (e.g. a MacBook Pro)? Should I use some kind of charge cycle, such as: charge to 80%, unplug the power cord, use the laptop until it reaches 20%, then repeat the cycle by charging to 80% again? If so, which values should I use instead of 80% and 20%? (I think charging to 80% is better than 100% because of the damage that a hot battery at 100% can do, but I just made the figure 80% up, and I'm sure there's a better number to strive for which is backed by science.) I've read many of the articles on batteryuniversity.com, but couldn't find anything pertaining to this.

    Update: What about doing something like this: charge (or discharge) it to 50%, then plug it in and turn on settings which use the battery as much as possible (e.g. brightness all the way up, Wi-Fi on, etc.), in order to try to maintain the battery at 50% (i.e. it charges at the same rate it discharges). This would probably heat up the battery, but would mean you don't need to constantly plug and unplug the laptop. The one bad thing is that you are using up more charge cycles, which would decrease the battery life, so I'm not sure this is a good idea.


  • Cut (smart edit) .mts (AVCHD Progressive) files on Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. It's OK for me to cut at keyframe boundaries.

    If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them. (At least that's what it was last time I tried with similar videos, but in a file format supported by Avidemux.) So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to go up to 240 or even 480 hours just because the tool is too slow or unstable or has a terrible UI.

    I've tried Avidemux 2.5.5 and 2.5.6, but they crash trying to open such a file, even if I convert the file to .avi first using:

        mencoder -oac copy -ovc copy

    mplayer can play the files. I've tried Avidemux 2.6.0, which can open the file, but it cannot jump to the previous or next keyframe, etc. (if I make it jump to the next keyframe, and then to the previous keyframe, it doesn't end up at the original keyframe, and sometimes displays an error). Also, I'm not sure if Avidemux 2.6.x would let me save the result without re-encoding. I've tried Kdenlive 0.7.7.1, but playback is very choppy, and it cannot play audio at all (complaining that SDL cannot find the device, though many other programs on the system can play audio). It would be a pain to work with.

    I've tried converting the .mts file to .mkv using:

        ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv

    but that caused too many visible distortions in the video in both mplayer and Avidemux. I've tried converting the .mts file with TsRemux.exe, but Avidemux 2.5.x still can't open that file. Is there another program to cut and concatenate the files? Is there a preprocessor which would create a file (without re-encoding the video) on which Avidemux wouldn't crash?
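    ffmpeg itself can do keyframe-bounded cutting and joining with stream copy, which avoids a GUI editor entirely. A sketch, with all timestamps illustrative; it assumes a newer ffmpeg build, since one as old as Lucid's spells the options -vcodec copy -acodec copy and may lack the concat demuxer:

        # extract the interesting pieces without re-encoding; -ss before -i
        # snaps the start of each cut to a keyframe
        ffmpeg -ss 00:01:30 -i input.mts -t 00:00:40 -c copy part1.mts
        ffmpeg -ss 00:05:00 -i input.mts -t 00:01:10 -c copy part2.mts

        # join the kept pieces, again without re-encoding
        printf "file '%s'\n" part1.mts part2.mts > list.txt
        ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mts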

  • MS Excel 2010 - Using DSN + 32 bit drivers

    - by Kristiaan
    I need some advice, as I'm running into a problem and so far I have been unable to find a solution. We have a set of reports developed in MS Excel that use DSN files to connect to data sources to retrieve data. These work fine on 32-bit and 64-bit systems; however, we are moving to a terminal server environment using Windows 2008 R2 64-bit. The reports fail to run using the DSNs within this environment if we only have the 32-bit drivers installed and configured in the ODBC settings; the minute we install the 64-bit drivers, the software works. Is there a way/method of getting Excel or the DSN file to NOT use the 64-bit driver, but force it to use the 32-bit driver?

    ANSWERED - But due to my low user score I cannot "answer" my own question... Sadly, there is no way to do what I want to do without a lot of very nasty and not-100%-perfect registry hacks. If you need to access 32-bit ODBC data sources, the application in question has to be 32-bit. Here is a link to just one forum post I found relating to this type of problem; it appears the only way I would be able to accomplish this is to remove the 64-bit version of Office and install the 32-bit version instead: http://social.msdn.microsoft.com/Forums/en-US/accessdev/thread/5108f337-f06a-4518-afe3-d3c1abd040ef/
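    A detail that bites many people on 64-bit Windows, for reference (both paths below are the standard locations): there are two separate ODBC administrators, and DSNs created in one are invisible to drivers of the other bitness.

        REM 64-bit ODBC administrator (what Administrative Tools opens by default)
        C:\Windows\System32\odbcad32.exe
        REM 32-bit ODBC administrator, where 32-bit drivers and their DSNs live
        C:\Windows\SysWOW64\odbcad32.exe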

  • diskmgmt.msc: Cannot delete volume from USB

    - by Notinlist
    I have a USB drive about 8 GB in size. It has a single partition of 169 MB; don't know why, I got it that way. I wanted to delete this small (FAT32) partition and create a single NTFS volume on the drive.

    First, I noticed that the "Delete volume" option is disabled (grayed out). I then tried "Change drive letter and paths..." and removed "F:"; that way I made sure that there are no open files on it. "Delete volume" was still disabled. Then I got suspicious and right-clicked on the "Unallocated" area, and I noticed that I did not have any useful option there either: all "New * volume" items are disabled. I exited diskmgmt.msc, ran cmd.exe with administrator privileges, launched diskmgmt.msc from it, and had the same experience.

    Why can't I do anything with this disk? I've read some advice about downloading alternative free software, but I'd rather not do that if possible. I still hope that Windows 7 Enterprise 64-bit alone can reinitialize a USB drive without external help. I also cannot do anything with my other 8 GB pendrive. It's all one NTFS volume; I tried to delete it, but the option is disabled here too. Maybe I have some setting somewhere that prevents me from partitioning USB disks. (I do have the freedom to remove my D: partition, which is the second one, not counting the "System reserved" partition, on my SSD.)
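    Older Windows versions treat flash drives flagged as "removable" as single-partition media, which grays out most volume operations in Disk Management; diskpart, which ships with Windows 7, is less restrictive. A sketch, assuming that is what is happening here; note that clean wipes the entire stick, so the disk number must be verified against the output of list disk:

        diskpart
        rem verify the disk number against the size column before going further
        list disk
        select disk 2
        clean
        create partition primary
        format fs=ntfs quick
        assign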

  • General name for Macs' operating system

    - by andy124
    First of all, I hope that my question is fairly suitable for this site. I have a website where I would like to write articles about some operating systems, so I have created a main category called "Operating systems". Within a subcategory, I would like to write articles about Apple's operating system that runs on Macs. However, I do not know what to name this category.

    I always thought the name was just OS X, but come to think of it, the "X" is actually part of the version (10). Therefore I cannot exactly call my category OS X, because what about when OS 11 is released in a few years? And since Apple has gone from Mac OS X to just OS X, I cannot use "Mac OS". If I remove the X from OS X, I only have "OS" left, which does not seem proper. I am really looking for a meaningful all-round name for the Mac operating system that does not involve the versioning. I was thinking about just calling the category "Mac", but that is not precise either; perhaps it's the closest I can get?

  • Procurve Primary VLAN

    - by fukawi2
    I'm trying to deprecate the use of VLAN 1 on my ProCurve switches; VLAN 1 is unused. I understand that VLAN 1 must exist, but I want to remove it from all ports, especially trunks between switches. The problem I have is that stacking does not seem to work without VLAN 1.

    I have changed the primary VLAN and management VLAN on all the switches:

        (config)# primary-vlan 42
        (config)# management-vlan 42
        (config)# no vlan 1 untagged 25

    Port 25 is the link between the two switches I'm testing with (the stack master and a member switch); I only want tagged traffic between the switches, no untagged frames. show stacking on the master shows all members as "UP", but I cannot telnet to any of them:

        Telnet failed:  Connection timed out.

    All switches have manually assigned (static) IP addresses on VLAN 42, and all exist in the same /25 subnet, as does my desktop. I can telnet to the switches directly from my desktop using the individual switch IP addresses, just not from the master switch. Do I need to reboot the switches for the primary-vlan change to take effect? Or is there something else I'm missing?
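    A hedged check, under the assumption that stack management traffic rides the primary VLAN: the inter-switch link then has to carry VLAN 42 on both switches, so it's worth confirming port 25 is tagged in 42 at each end before suspecting the primary-vlan change itself.

        (config)# vlan 42 tagged 25
        (config)# show vlans 42
        (config)# show stacking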

  • Windows 2003 DC to Windows 2008 R2 DC with same name and same IP

    - by TheCleaner
    Environment: a Windows 2003 native domain with 8 DCs.

    I've got an old domain controller running 2003, with the Enterprise CA role, DHCP, DNS, a few GPO scripts that point to shares on it, and some other minor functions. All our servers point to it as their primary DNS, and there are lots of references to its IP and name throughout the domain at this point (8+ years later). I really don't feel like manually changing all of this; it would be a pretty massive undertaking.

    I want to follow this guide: http://msmvps.com/blogs/acefekay/archive/2010/10/09/remove-an-old-dc-and-introduce-a-new-dc-with-the-same-name-and-ip-address.aspx to hopefully end up with basically an "in-place upgrade", so to speak.

    I considered just doing a P2V of the box, but we don't really want to keep it around running 2003, to be honest. I also considered using a CNAME and adding a second IP (the old one), but again, it seemed cleaner to follow the attached link.

    My actual question: any gotchas or big caution signs when doing what the link suggests? Has anyone gone down this road who has advice on how to proceed?
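    Not a substitute for anyone's first-hand war stories, but a sketch of the built-in sanity checks worth running once the replacement 2008 R2 DC is up under the old name and IP (all of these ship with the OS):

        REM overall DC health, replication state, and DNS registration
        dcdiag /v
        repadmin /replsummary
        dcdiag /test:DNS /DnsAll

    One known gotcha class: the Enterprise CA role needs its own backup-and-restore migration step, and keeping the same hostname works in your favour there, since the CA's certificates embed the CA's identity.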

  • Zpool disk failure - Where am I at?

    - by JT.WK
    After checking the status of one of my zpools today, I was faced with the following:

        root@server: zpool status -v myPool
          pool: myPool
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error.  An
                attempt was made to correct the error.  Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
         scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010
        config:

                NAME          STATE     READ WRITE CKSUM
                myPool        ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c6t7d0    ONLINE       0     0     0
                    c6t8d0    ONLINE       0     0     0
                    spare     ONLINE       0     0     0
                      c6t9d0  ONLINE      54     0     0
                      c6t36d0 ONLINE       0     0     0
                    c6t10d0   ONLINE       0     0     0
                    c6t11d0   ONLINE       0     0     0
                    c6t12d0   ONLINE       0     0     0
                spares
                  c6t36d0     INUSE     currently in use
                  c6t37d0     AVAIL
                  c6t38d0     AVAIL

        errors: No known data errors

    From what I can see, c6t9d0 has encountered 54 read errors. It seems to have automatically resilvered to the spare disk c6t36d0, which is now in use. My question is: where exactly am I at? Yes, the 'action' tells me to determine whether or not the disk needs replacing, but is this disk currently still in use? Can I replace/remove it? Any explanation would be much appreciated, as I'm quite new to this stuff :)

    Update: After following the advice from C10k Consulting, ie detaching:

        zpool detach myPool c6t9d0

    and adding as a spare:

        zpool add myPool spare c6t9d0

    it appears as though all is well. The new status of my zpool is:

        root@server: zpool status -v myPool
          pool: myPool
         state: ONLINE
         scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010
        config:

                NAME         STATE     READ WRITE CKSUM
                myPool       ONLINE       0     0     0
                  raidz1     ONLINE       0     0     0
                    c6t7d0   ONLINE       0     0     0
                    c6t8d0   ONLINE       0     0     0
                    c6t36d0  ONLINE       0     0     0
                    c6t10d0  ONLINE       0     0     0
                    c6t11d0  ONLINE       0     0     0
                    c6t12d0  ONLINE       0     0     0
                spares
                  c6t37d0    AVAIL
                  c6t38d0    AVAIL
                  c6t9d0     AVAIL

        errors: No known data errors

    Thanks for your help, c10k consulting :)

  • Linux Mint Constantly freezing on Dell XPS L502X

    - by Josh
    I recently partitioned my hard drive to dual-boot the existing Windows 7 with Linux Mint, because I am tired of using Windows, especially its lack of a decent terminal. I want to eventually remove Windows 7 and just run it from a VM within Linux Mint, but I want to make sure that I like Mint before going all in. I ran Linux Mint in a VM inside Windows for a while, enjoyed it, and never had any issues with it.

    Since installing it on my hard drive, however, it has started freezing every 5-10 minutes, and the only way to get it back is to either power down, or close the lid and reopen it once it sleeps.

    I've also tried running Ubuntu in a dual boot in the past, and while it never froze, the battery life was terrible and the fan was constantly running. I'm experiencing the same battery/fan problem with Mint, which doesn't make sense to me, as Linux should be lighter on the CPU than Windows. If I had to guess, I'd say it's probably a driver issue with my video card or fan or something. My battery life in Windows is ~2 hours, and it's about 40 minutes in Linux; that is, if my laptop doesn't freeze before then.

    On a less important note, I also have an Intel Centrino 6150 WiMAX card that I'd like to be able to use, but that won't register on the Linux system either. I have tried downloading drivers for both of these, but neither has solved my problems. I'm definitely getting frustrated and am getting close to giving up on Linux, even though I dread working on a Windows machine.
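    The XPS L502X pairs an Intel GPU with an NVIDIA one (Optimus), and on distributions of that era both GPUs ran flat out with no switching, which matches the heat, fan and battery symptoms. A sketch of the usual remedy, assuming Optimus is indeed the culprit (Mint can use Ubuntu PPAs):

        # install Bumblebee so the NVIDIA GPU is powered down until requested
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        # after a reboot, run something on the NVIDIA card explicitly:
        optirun glxgears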

  • Certificate enrollment request chain not trusted

    - by makerofthings7
    I am working on an MSFT lab for DirectAccess, and need to create a web certificate. The instructions ask me to do the following:

    1. On EDGE1, click Start, type mmc, and then press ENTER. Click Yes at the User Account Control prompt.
    2. Click File, and then click Add/Remove Snap-ins.
    3. Click Certificates, click Add, click Computer account, click Next, select Local computer, click Finish, and then click OK.
    4. In the console tree of the Certificates snap-in, open Certificates (Local Computer)\Personal\Certificates.
    5. Right-click Certificates, point to All Tasks, and then click Request New Certificate.
    6. Click Next twice.
    7. On the Request Certificates page, click Web Server, and then click More information is required to enroll for this certificate.
    8. On the Subject tab of the Certificate Properties dialog box, in Subject name, for Type, select Common Name.
    9. In Value, type edge1.contoso.com, and then click Add.
    10. Click OK, click Enroll, and then click Finish.
    11. In the details pane of the Certificates snap-in, verify that a new certificate with the name edge1.contoso.com was enrolled with Intended Purposes of Server Authentication.
    12. Right-click the certificate, and then click Properties.
    13. In Friendly Name, type IP-HTTPS Certificate, and then click OK.
    14. Close the console window. If you are prompted to save settings, click No.

    In production, our company has overridden the Web Server template, and it doesn't seem to be issuing certificates with the full CA chain. When I look at the issued certificate's properties, both tiers of the two-tier CA hierarchy are missing. How can I fix this? I'm not sure where to look outside the GUI.
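    A place to look outside the GUI, as a sketch (the file names are illustrative): certutil can show exactly which chain an issued certificate builds and which store a missing tier should go into.

        REM show the chain the machine can build for the issued cert
        certutil -verify -urlfetch edge1.cer

        REM if a tier is absent, import the root and the issuing CA certs
        REM into the machine stores ("Root" = trusted roots, "CA" = intermediates)
        certutil -addstore Root root-ca.cer
        certutil -addstore CA issuing-ca.cer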

  • Office 365 domain federation conversion failed

    - by Matt Bear
    We're doing things backwards: we have an established O365 domain with 400+ users, and are just now deploying local AD, and ADFS for SSO. Last night, after configuring my servers, I ran the PowerShell command

        Convert-MsolDomainToFederated

    to convert the xxx.com vanity domain to federated. It errored out with an unspecified error (Microsoft ADFS support said the error has to do with the default password settings being changed). When I run

        Convert-MsolDomainToStandard

    it comes back saying the domain is already standard. In the O365 portal the domain also shows as standard; however, it is trying to process login attempts as if it were a federated domain.

    I've spent 5 hours total on the phone with Microsoft, and it has been escalated to their engineering department for resolution, sometime within the next few days... I need it yesterday.

    From what we can gather, the conversion process started, errored out, changed some of the internal configuration to federated, but left the description as standard (if that makes sense). So it's in a weird limbo where it's in both modes, and neither, at the same time. Currently, the only way to fix it is to remove the vanity domain and re-add it. I need a way to dissociate the user accounts from the xxx.com domain to allow its removal. Removing the users themselves is not an option.
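    Offered with the caveat that Microsoft already has the case: two MSOnline-module sketches that map to this situation. The first forces the domain's authentication type back to managed, which can clear a half-converted state without touching the domain; the second re-homes UPNs onto the tenant's initial onmicrosoft.com domain (the placeholder name here is an assumption) so the vanity domain can be removed without deleting users.

        Connect-MsolService
        # attempt to force the stuck domain back to managed authentication
        Set-MsolDomainAuthentication -DomainName xxx.com -Authentication Managed

        # or: move every user off the vanity domain before removing it
        Get-MsolUser -All | Where-Object { $_.UserPrincipalName -like "*@xxx.com" } | ForEach-Object {
            $new = $_.UserPrincipalName.Split("@")[0] + "@tenant.onmicrosoft.com"
            Set-MsolUserPrincipalName -UserPrincipalName $_.UserPrincipalName -NewUserPrincipalName $new
        }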

  • emacs ORG-mode "headless" export-as commands?

    - by Seamus
    When I use org-export-as-latex or org-export-as-html, org-mode turns my buffer into a .tex file or .html file. But I don't want all the extra junk that it adds to the file: I want to handle the documentclass and everything myself and just \input the org-mode-generated file. (Or the analogous thing for HTML, with PHP.) So if my org file just has:

        * Section
        - Stuff
        - Things

    I want the org-mode command to output just:

        \section{Section}
        \begin{itemize}
        \item Stuff
        \item Things
        \end{itemize}

    without any of the extra \tableofcontents junk that org adds to it. I know I could define my own kind of #+LaTeX_CLASS that could add the packages I want and so on, but I don't want to do things that way (and that wouldn't remove the \maketitle or the spurious \vspace* that org insists on inserting).

    Is there a command to do this "headless" parsing and converting? I had a look, but it's not obvious from the documentation. Presumably some low-level org command is doing the parsing and converting I want, but I couldn't find what it was called from looking at the docs and C-h pages...

    This is not a question about HTML or LaTeX but about emacs org-mode. So don't kick it off to some other site...
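    The exporters do have a body-only switch; a sketch, assuming the org 8+ exporter (ox-latex), where body-only is the fourth argument and suppresses the preamble, \maketitle and \tableofcontents:

        # batch export from a shell; only the document body lands in notes.tex
        emacs --batch notes.org --eval "(progn (require 'ox-latex) (org-latex-export-to-latex nil nil nil t))"

    Interactively, the same thing is C-c C-e, then C-b to toggle "body only", then l l for LaTeX (or h h for HTML).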

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory. Of course I need to keep that, so I added --protect 'lost+found'. The problem is that tmpreaper then outputs:

        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied
        (PID 30604)  Back from recursing down `lost+found'.
        Entry matching `--protect' pattern skipped. `lost+found'

    I have tried other patterns like lost* and so on... I'm running tmpreaper as a non-root user because there is no reason for superuser privileges; I own all the files (except lost+found). Am I forced to run tmpreaper as root? Or are my shell skills not as good as I thought?

    I guess the problem is this, from the documentation: tmpreaper will chdir(2) into each of the directories you've specified for cleanup, and check for files matching the <shell_pattern> there. It then builds a list of them, and uses that to protect them from removal.

    Any thoughts and/or advice?

    Edit: The command I'm trying to run is something like:

        $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null
        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied

    Edit 2: I read the source code for tmpreaper-1.6.13 and found this:

        if (safe_chdir (dirname))
          exit(1);

    and:

        if (chdir (dirname)) {
          message (LOG_ERROR,
                   "chdir() to directory `%s' (inode %lu) failed: %s\n",
                   dirname, (u_long) sb1.st_ino, strerror (errno));
          return 1;
        }

    So it seems tmpreaper needs to be able to chdir into all directories, ignored or not. I see three options left:

    1. Run tmpreaper as root
    2. Move the download directory
    3. Find an alternative tool (tmpwatch?)

    I will give it some more research before I make a choice.
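    As another data point for option 3, a find-based stand-in can be told not to descend into lost+found at all, so it never needs the chdir that trips up tmpreaper. A sketch (the 30-day cutoff mirrors the tmpreaper invocation above):

        # prune lost+found, delete regular files untouched for more than 30 days
        find /mydir -mindepth 1 -name lost+found -prune -o \
             -type f -mtime +30 -exec rm -f {} +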

  • Arch Linux: eth0 no carrier - network fails at boot

    - by user905686
    The problem: my computer is connected to a network where DHCP is required. My network configuration in /etc/rc.conf looks like:

        interface=eth0
        address=
        netmask=
        broadcast=
        gateway=

    My daemons are:

        DAEMONS=(!hwclock syslog-ng network netfs crond ntpd)

    With this configuration, Arch hangs a long time at "Network" during boot (it still says "[done]", but after boot I have no connection). I have found two workarounds.

    Workaround 1: remove network from the daemons and run mii-tool --reset eth0 followed by dhcpcd eth0 after boot (somehow this does not work when the commands are placed in /etc/rc.local). DHCP then works very quickly, because of the reset. Before executing the first command, ip link show eth0 shows "NO CARRIER" in its output; afterwards it doesn't. (Also, mii-tool first shows "no link", and afterwards "eth0: 10 Mbit, half duplex, link ok".)

    Workaround 2: change the network configuration to:

        interface=eth0
        address=x.y.z.21
        netmask=255.255.255.0
        broadcast=x.y.z.255
        gateway=x.y.z.254

    where x, y, z form the specific addresses of the network (though DHCP is used, I get a static IP), and add the commands mii-tool --reset eth0 and dhcpcd eth0 to /etc/rc.local. Now the network starts quickly at boot (though I don't know if successfully), the commands in /etc/rc.local are executed, and the connection is fine after login.

    What to do? The problem seems to be that dhcpcd gets stuck at "waiting for carrier" or something similar. I do not like the workarounds, because some daemons need the network (though they seem to start). What can I do to have eth0 ready for DHCP at boot? Or is there another problem?
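    A sketch for the initscripts-era setup described here: dhcpcd can be told to fork to the background immediately instead of blocking the network daemon while it waits for carrier, via its arguments file (-b does the backgrounding; -q just quiets the output).

        # /etc/conf.d/dhcpcd
        DHCPCD_ARGS="-qb"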

  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP.

    When a response comes in from my CGI script with a secure cookie, it is not passed on via HTTP (after all, that is what the attribute is for). I need, however, an exception to this rule.

    Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, such that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need this as a temporary workaround.

    I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and I have also found how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to extract the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that?

    Ole
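    Response headers are mod_headers territory rather than mod_rewrite's; a sketch for the proxy's vhost, assuming Apache 2.2.4 or later (where Header edit exists):

        # rewrite the upstream Set-Cookie after mod_proxy receives it,
        # stripping the Secure attribute (case-insensitively)
        Header edit Set-Cookie "(?i)^(.*);\s*Secure(.*)$" "$1$2"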
