Search Results

Search found 22701 results on 909 pages for 'missing features'.


  • During Vista Repair - No operating system is listed.

    - by Jack Marchetti
    After a Windows update, my brother's Gateway computer loads to "Step 3 of 3: 0%" and reboots. Safe Mode does not work. I placed a Vista DVD in the drive and rebooted. (Note: this is my Vista DVD, not the recovery/system disc that would come with a computer. Gateway does not ship CDs anymore; I believe they store the recovery image on a partition, but that partition has been wiped out.) I chose "Repair Your Computer" and get a dialog box, but no operating system is listed. I'm then prompted to "Load Drivers". What drivers am I supposed to be loading here, and from where? I placed a CD in the drive to load drivers, but I don't see my DVD drive listed. All I saw was X:\Sources along with several removable-media slots that were empty. On another screen I tried Startup Repair, which didn't do anything. I attempted to use System Restore, but it doesn't detect the hard drive. I'm guessing that I'm missing some sort of SATA driver and that is why the hard disk is not being found. Any ideas on this?

  • How to access Windows Server 2008 R2 file shares from a different subnet

    - by Lloyd Cotten
    We have a couple of servers that used to run Windows Server 2003 and that we recently upgraded to Windows Server 2008 R2. A couple of details to set the situation up: we wiped the OS and re-installed. These servers are on one subnet (172.16.x.x) and we are trying to access some file shares on them from another subnet (10.34.x.x). The firewall is disabled on these servers. We are trying to access the shares with the UNC path \\172.16.x.x\sharename and with net use \\172.16.x.x, but we're having problems doing this: we get "The network path was not found". Here are some of the things we've tried so far, and the results:

        Accessing the share from other (non-2008) servers on the same subnet... Success!
        Pinging the servers from the other subnet... Success!
        Telnet connection to port 139 from the other subnet... Success!
        Scanning through the Local Security Policies for something obvious to enable / disable / configure... Fail.

    I'm not sure where to look next. I know that the router between the two subnets is locked down pretty tightly, but this did work for our 2003 servers. Has anything changed in the ports used for UNC / file-share access in 2008? Maybe I'm missing some security-policy setting? Hoping somebody can take pity on a poor programming guy who can't figure out something really simple. :-) Thanks!
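
    One extra check worth adding to the list above (an assumption on my part, not something the poster reports trying): Server 2008 speaks SMB primarily over TCP 445 rather than the NetBIOS session port 139, so a router ACL that only passes 139 between the subnets can produce exactly "The network path was not found" even though 139 connects. A quick probe from the 10.34.x.x side:

        telnet 172.16.x.x 445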

  • IIS 6 getting "Page Not Found" after applying SSL

    - by Dominic Zukiewicz
    I am setting up SSL certificates on a development environment using IIS 6 on Windows Server 2003. I have a directory called login with a single page, login.asp, which I would like viewable only over SSL. Before installing or applying SSL permissions, the page is viewable through a browser: I can browse the page, it redirects, etc., and all is good. However, Basic Authentication is only Base64 encoded, so I want to secure the traffic from this page only. I have created a dummy certificate with makecert, installed it, and added it to IIS; IIS is happy that it is trusted. I have set the login directory and its child files to "Require SSL channel". Now when I refresh my browser on login/login.asp I get a "404: Page Not Found" in IE 8. So there are two issues here:

        1. The page is now unviewable when using HTTPS.
        2. Users must manually type the HTTPS prefix (a minor inconvenience for now).

    If I turn off "Require SSL channel" in IIS, it works again. What part of the process am I missing? I have followed several tutorials on installing SSL certificates, but I still come up against this barrier.

  • How can I set the CD audio volume in Linux?

    - by user1296362
    In the Windows 7 Control Panel - Sound - Sound Properties window there's a slider for setting the CD Audio volume, and it's pretty strange that I can't find a corresponding one in the generic Linux mixers alsamixer or amixer. I connected a CD drive to try to set the CD audio volume with cdcd (CD Player):

        $ cdcd setvol 0
        Invalid volume

    It isn't actually an invalid volume; the ioctl() call fails. I found that out after searching and changing the source code of this utility (in libcdaudio) a bit:

        --- cdaudio.c.orig      2004-09-09 06:26:20.000000000 +0600
        +++ cdaudio.c   2012-05-30 21:34:34.167915521 +0600
        @@ -578,8 +578,10 @@
             cdvol_data.CDVOLCTRL_BACK_RIGHT_SELECT = CDAUDIO_MAX_VOLUME;
         #endif
         
        -  if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0)
        -    return -1;
        +  if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) {
        +    printf("*** cd_set_volume: ioctl() returned error\n");
        +    return -1;
        +  }
         
           return 0;
         }

    By the way, cdcd's get volume command yields rather weird output:

              Left       Right
        Front 1281734864 32767
        Back  0          0

    I also tried aumix:

        $ aumix -c 0

    But all with no success. I read in this manual — http://tldp.org/HOWTO/Alsa-sound-6.html (section 6.2, The mixer) — that a CD channel can be present in amixer output. Maybe some drivers for the sound card are missing in my Ubuntu 12.04 LTS installation, though I don't think that's the case:

        $ lsmod | grep snd
        snd_mixer_oss          22602  0
        snd_hda_codec_hdmi     32474  1
        snd_hda_codec_realtek 223867  1
        snd_hda_intel          33773  4
        snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        snd_hwdep              13668  1 snd_hda_codec
        snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    78855 19 snd_mixer_oss,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        snd_page_alloc         18529  2 snd_hda_intel,snd_pcm

    All I need is to mute, or set to 0, the volume level of the CD Audio channel, like I did in Windows 7, to get rid of the sibilant noise in the speakers.
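
    For what it's worth, here is a minimal sketch (my assumption, not something from the post) of how the mute would be done with amixer if the driver exposed a 'CD' simple control — which appears to be exactly what is missing here:

        # See whether a 'CD' simple control exists at all on card 0
        amixer -c 0 scontrols

        # If it does, mute it or zero its volume
        amixer -c 0 sset CD mute
        amixer -c 0 sset CD 0%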

  • SSD causing 100% CPU usage in Apache/PHP

    - by Tim Reynolds
    I wanted to increase the performance of my development laptop, so I added an Intel 320 Series SSD as my primary drive. Everything is amazingly fast, as expected, except Apache/PHP. I develop Magento using an Ubuntu 10.10 virtual machine. Information:

        Host OS: Windows 7 Professional 64-bit
        Guest OS: Ubuntu 10.10 32-bit
        Processor: i7, QM55 chipset
        SSD: Intel 320 Series 160 GB, 30% full
        HDD: Hitachi 320 GB, 50% full (in the side bay using an adapter)
        Laptop: Lenovo T510
        Using: shared folders
        Apache version: 2.2.16
        PHP version: 5.3.3-1
        APC version: 3.1.3p1
        APC memory: 128M
        Using tmpfs for the cache, log, and session directories in Magento

    In the VM running on the SSD (VM files and source files are on the same drive), loading a product page in the Admin takes 26.2 seconds on average and uses 100% CPU for nearly the entire time. In the VM running on the old HDD, loading the same page takes 4.4 seconds on average and mostly uses around 40-50% of the CPU while rendering the page. I have read this post: Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)? It says to change some settings in the BIOS. I have turned any and all power-management features off in the BIOS. I can't for the life of me understand why this would be happening.

  • Cannot start Postgres daemon after installing with Yum

    - by Sean the Bean
    I was trying to install Postgres 9.1.4 on Fedora 17 using yum. If I do:

        sudo yum install postgres-libs
        sudo yum install postgres
        sudo yum install postgis

    all the installs appear to complete successfully (i.e., no errors), but I cannot start the Postgres daemon using:

        service postgresql initdb

    like the official Postgres download guide says to do (http://www.postgresql.org/download/linux/redhat/). The error says "Unknown operation initdb". RPM tells me that it installed psql to /usr/bin/, which I confirmed. It turns out that only a few components installed correctly (psql, pg_dump, pg_config, and a few others), but most are missing (e.g., pg_ctl and postgres). I've tried several different configurations and had several of my coworkers (with more Linux experience than me) look at it, but so far nothing has worked. Two of them have also run into similar issues installing Postgres using apt-get on Ubuntu, which makes me think the RPM isn't doing its job. It seems the only solution is to build it from source, which is more robust anyway, but of course it takes longer. I'm wondering, though, if anyone else has run into this issue and/or has successfully installed Postgres on either Fedora or Ubuntu using a package manager like yum or apt-get? Is the RPM broken?
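
    As a point of comparison, here is a sketch of the steps the stock Fedora 17 packages expect, to the best of my knowledge — treat the package names and the postgresql-setup call as my assumptions rather than anything from the download guide (the server binaries such as postgres and pg_ctl live in postgresql-server, and the systemd-era service wrapper no longer implements initdb):

        sudo yum install postgresql postgresql-server postgresql-contrib postgis
        sudo postgresql-setup initdb          # replaces "service postgresql initdb" on Fedora's systemd releases
        sudo systemctl start postgresql.service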

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction-log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages:

        DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs.

    My google-fu suggests that I need to change the full backup my SQL Agent job runs into a copy-only backup. But I think this means that I can't use that backup together with the transaction logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. Setting up a co-located DPM server elsewhere and having DPM stream the backup is an option, but that's obviously more expensive than the current setup. Many thanks in advance.
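
    For reference, a copy-only full backup in T-SQL looks roughly like the sketch below; the database name and path are placeholders of mine, not from the post. A copy-only full backup does not reset the differential base, which is why it can coexist with another product's backup schedule, and it can still be restored on its own in a disaster scenario:

        BACKUP DATABASE [YourTfsDatabase]
        TO DISK = N'D:\Backups\YourTfsDatabase_copyonly.bak'
        WITH COPY_ONLY, INIT, CHECKSUM;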

  • Hardware issue: computer plays beep sounds at startup, and then turns off

    - by Darkkurama
    It turns out that after moving, my computer got damaged somehow. I set up my computer two days ago. I didn't check the state of the hardware parts and turned the PC on recklessly, not realizing that the CPU fan was detached from the CPU. After this I reattached the fan. The computer worked somewhat faultily after that: at startup it would play a long beep (different from the normal beep) and wouldn't display anything. I tried restarting after this incident and it got fixed (I don't know how, but it did), and I was even able to play some video games for several hours (more than 5). This morning I tried to get this issue sorted out by checking the CPU again and the connections in general, and to my surprise, when I turned it on again the computer wouldn't boot at all. I reapplied thermal paste to the CPU and fan, but got no results (I did this because the old paste was very badly distributed along the surfaces of both the CPU and the fan). Now every time I turn it on, the computer acts randomly:

        Sometimes at startup it plays a long continuous beep and then turns off.
        One time it played a long continuous beep and got to the Windows password input screen, but then it turned off.
        More frequently, I turn it on and after a few seconds it turns off without any sound.

    You can check the video I recorded, which shows how the computer behaves most of the time, and another one as well. I tried troubleshooting it myself by disconnecting my graphics card, RAM, HDD, and even the CPU and its fan, with no result. Computer specs: Intel Core 2 Quad Q6600 processor, Nvidia XFX GTX 260 Black Edition, 4 GB RAM, 500 GB hard drive, Windows 7 64-bit.

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, the partition where the delete happened is ext3, and the project consists mostly of PHP files. I know about the guideline to not write to the disk in question. The first idea was to use the tool named scalpel to get the PHP files back, diff them against the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option. Can a kind soul please save me and suggest a way to: a) get the repo back, or b) get the files back, with filenames. For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:

        :!hg rm %

    This complained that the file is in a subrepository, so I specified the following:

        :!hg rm % -R engine

    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:

        :!rm -rf % -R engine

    Somehow, seeing "force" makes me do a rm -rf by reflex.

  • How to make network drives appear even if disconnected?

    - by Jake
    I have the same problem as many others: network and home drives set by Group Policy and AD are not connected at Windows startup. The prime suspect is that the LAN or wireless does not connect until after the user logs in. I have already given up on that. Now I just want the disconnected drives to keep appearing in My Computer, so that if the user goes in and double-clicks the drive, it will connect again. However, on some machines the drive is completely missing from My Computer. If I right-click My Computer and choose Map Network Drive again, it does work, but it's very troublesome to do that all the time. And I don't want to use a script to map the drives, because I don't want to appear to be using a hacky solution to the users; drives listed as disconnected look more like a "built-in feature" and give users more confidence. How can I keep the disconnected drives listed in My Computer? I am using Windows 7 Professional and Windows Server 2008.

  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

        For more performance add fstab options: data=writeback,noatime,nodiratime
        i.e.: /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstalling it all and choosing ext2 at partitioning time? (I'd rather not, though, since I've configured quite a lot of stuff already.)

  • Backup of TFS with DPM 2007 - different backup times

    - by user46516
    I have DPM set up to back up my TFS server every 30 minutes, the reason being that it's a far better interface than the quirky SQL backup interface. I also do a full backup nightly using a SQL maintenance job. My thinking is that I would use DPM to restore my databases if I lost them, and the nightly full backup would be a "just in case" should the DPM restore not work. I was thinking a little harder about this setup today and realised that the DPM backups of the individual databases happen in different 30-minute windows, i.e. one happens at 13h30, another at 13h34, etc. Would this difference in time be a problem when it comes to restoring the TFS server? If I restore the databases and they are from different times, will this create corruption, with pointers in one database pointing to missing items in the other? Do the databases even rely on each other, or are they completely independent? Lastly, how would the SQL (log) backup cope with this?

  • How can I disable DNSSEC for Google Apps (GMail) MX records on my authoritative domains?

    - by meinemitternacht
    I'm running a BIND master/slave setup with DNSSEC, but some of my domains use Google Apps for e-mail services. Google doesn't support DNSSEC, and BIND doesn't like it at all. Log output:

        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ALT2.ASPMX.L.GOOGLE.COM AAAA: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 69.147.224.178#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ALT2.ASPMX.L.GOOGLE.COM A: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 69.147.224.178#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f754c1b0bd0: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f754c1a6a30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep  6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
        Sep  6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX3.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX3.GOOGLEMAIL.COM.dlv.isc.org/DLV)

    I'm not absolutely sure this is stopping Google Apps from working, because I just enabled all of the DNSSEC features. Does anyone here have experience with this?

  • Very long (>300s) request processing times on an Apache server serving static content to particular IPs

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months some users have been reporting slow response times, while others (including our own people, both on the internal network and on our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); fixing that solved the bulk of our issues. But we still have some customers who show up in the Apache logs (using %D in the log format) with request processing times of 300s for images, CSS, JavaScript and other static content. We've checked all Deny / Allow statements for other occurrences of "none", as well as everything else we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, or using %h instead of %a in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for the people having this problem do not time out, so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios someone can point me to that I might be missing, that would cause request times for static content to take so long only for certain users? Thank you in advance.
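
    For anyone skimming, a rough Apache 2.2 sketch of the settings discussed above (my illustration, not the poster's actual configuration); the point is that hostname-based access rules and log fields force reverse lookups, while the IP-based forms do not. The directory path is a placeholder:

        HostnameLookups Off
        LogFormat "%a %l %u %t \"%r\" %>s %b %D" timed   # %a logs the raw client IP; %D is processing time in microseconds
        <Directory "/var/www/static">
            Order allow,deny
            Allow from all      # a hostname argument here (such as the stray "none") triggers double reverse DNS lookups
        </Directory>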

  • Sandra reports my CPU as "Engineering Sample", how can I be sure this is correct?

    - by stevenvh
    I ran SiSoftware's Sandra on my new PC, and for my CPU it reports:

        Generation        : G8 / T29
        Name              : TN0 (Trinity) FX/Opteron 32nm (ES)
        Revision/Stepping : 0 : 10 / 1
        Stepping Mask     : TN-A1
        Microcode         : MU6F10010F

    The "(ES)" is a well-known code in product development, meaning "Engineering Sample". Those are beta versions of the CPU, which may still contain bugs or even have features switched off. I contacted both the PC's manufacturer, Medion, and AMD about this. I have to downvote the Medion helpdesk here: the person I talked to boldly said Sandra was wrong (without knowing how Sandra got this information; he didn't even know the software) and used the word "impossible". His conclusion was "We're not taking this in consideration for service". Right. So, if you like Medion for their good prices but like good support even better, you may want to consider buying your PC elsewhere. AMD was more helpful, but wanted to be sure before replacing the part (which I find reasonable). They suggested that I remove the cooler from the CPU to check what is printed on it. I'm a bit reluctant here: I would have to wipe the thermal paste from the CPU, and I won't know for sure that my cooling will still be OK afterwards. Questions:

        Has anybody actually found a confirmed ES CPU in her PC?
        Is anybody aware of Sandra erroneously reporting CPUs as Engineering Samples?
        How can you tell an ES, apart from the print on the package?
        Shouldn't the Stepping Mask identify the CPU uniquely?

  • How to best tune my SAN/Initiators for best performance?

    - by Disco
    As the recent owner of a Dell PowerVault MD3600i, I'm experiencing some weird results. I have a dedicated 24-port 10 GbE switch (PowerConnect 8024), set up with 9K jumbo frames. The MD3600 has 2 RAID controllers, each with 2x 10 GbE Ethernet NICs. There's nothing else on the switch; one VLAN for SAN traffic. Here's my multipath.conf:

        defaults {
            udev_dir               /dev
            polling_interval       5
            selector               "round-robin 0"
            path_grouping_policy   multibus
            getuid_callout         "/sbin/scsi_id -g -u -s /block/%n"
            prio_callout           none
            path_checker           readsector0
            rr_min_io              100
            max_fds                8192
            rr_weight              priorities
            failback               immediate
            no_path_retry          fail
            user_friendly_names    yes
            # prio                 rdac
        }
        blacklist {
            device {
                vendor  "*"
                product "Universal Xport"
            }
            # devnode "^sd[a-z]"
        }
        devices {
            device {
                vendor                 "DELL"
                product                "MD36xxi"
                path_grouping_policy   group_by_prio
                prio                   rdac
                # polling_interval     5
                path_checker           rdac
                path_selector          "round-robin 0"
                hardware_handler       "1 rdac"
                failback               immediate
                features               "2 pg_init_retries 50"
                no_path_retry          30
                rr_min_io              100
                prio_callout           "/sbin/mpath_prio_rdac /dev/%n"
            }
        }

    And iscsid.conf:

        node.startup = automatic
        node.session.timeo.replacement_timeout = 15
        node.conn[0].timeo.login_timeout = 15
        node.conn[0].timeo.logout_timeout = 15
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 10
        node.session.iscsi.InitialR2T = No
        node.session.iscsi.ImmediateData = Yes
        node.session.iscsi.FirstBurstLength = 262144
        node.session.iscsi.MaxBurstLength = 16776192
        node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144

    After my tests, I can barely reach 200 Mb/s read/write. Should I expect more than that? Given that it has dual 10 GbE, my expectation was around 400 Mb/s. Any ideas? Guidelines? Troubleshooting tips?
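
    One sanity check I'd suggest before tuning the multipath or iSCSI parameters further (my suggestion, not from the post): measure raw TCP throughput across the same 10 GbE VLAN with iperf, so the switch, jumbo-frame and NIC configuration can be ruled in or out before blaming the initiator stack. The server address below is a placeholder:

        # on a second host attached to the SAN VLAN
        iperf -s

        # on the initiator: four parallel streams to the host above
        iperf -c 192.168.130.10 -P 4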

  • Almost every Inkscape extension yields an error in Mac OS X

    - by andyvn22
    I've run the latest few versions of Inkscape (currently landed on "0.47+devel"), and have been having trouble with the Extensions menu. So far, in every version of Inkscape I've tried, nearly every extension yields the following error:

        The fantastic lxml wrapper for libxml2 is required by inkex.py and therefore this extension. Please download and install the latest version from http://cheeseshop.python.org/pypi/lxml/, or install it through your package manager by a command like: sudo apt-get install python-lxml

    I've tried the instructions listed there, of course, with no effect. I've also found many references to this issue on fora, in bug trackers, etc., and as such have also tried:

        sudo easy_install lxml

        cd /Applications/Inkscape.app/Contents/Resources/lib
        mv libxml2.2.dylib libxml2.2.dylib.old
        ln -s /usr/lib/libxml2.dylib

    and a few similar solutions. Nothing has produced any change in Inkscape's behavior. Does anyone know A) what's really going on here? Because from what I gather, the error is not describing the actual problem. And of course B) a simple solution? I need those features! :)
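
    A small, hedged sanity check (my addition, not from the post): confirm that whichever Python interpreter Inkscape's extensions actually run under can import lxml, since easy_install and apt-style instructions only touch the system Python and not any interpreter bundled inside the .app. Which interpreter that is on a given Mac is an assumption you'd need to verify:

        # run this with the interpreter you believe Inkscape uses; the interpreter path is the crux
        python -c "import lxml.etree; print lxml.etree.LXML_VERSION"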

  • FTP not listing directory contents (NcFTP PASV)

    - by Jacob Talbot
    I am attempting to set up Multicraft on my server. All is running smoothly, except that the FTP won't allow anyone to connect from a remote FTP client, whereas net2ftp works smoothly from a remote location. I have included the transcript from my FTP client, Transmit, below to give you an idea of what's going on. I have disabled iptables as well, and still no luck either way.

        Transmit 4.1.7 (x86_64) Session Transcript [Version 10.8.2 (Build 12C54)] (21/10/12 11:23 PM)
        LibNcFTP 3.2.3 (July 23, 2009) compiled for UNIX
        220: Multicraft 1.7.1 FTP server
        Connected to ateam.bn-mc.net.
        Cmd: USER jacob.9
        331: Username ok, send password.
        Cmd: PASS xxxxxxxx
        230: Login successful
        Cmd: TYPE A
        200: Type set to: ASCII.
        Logged in to ateam.bn-mc.net as jacob.9.
        Cmd: SYST
        215: UNIX Type: L8
        Cmd: FEAT
        211: Features supported:
          EPRT
          EPSV
          MDTM
          MLSD
          MLST type*;perm*;size*;modify*;unique*;unix.mode;unix.uid;unix.gid;
          REST STREAM
          SIZE
          TVFS
          UTF8
        End FEAT.
        Cmd: OPTS UTF8 ON
        200: OK
        Cmd: PWD
        257: "/" is the current directory.
        Cmd: PASV
        Could not read reply from control connection -- timed out. (SReadline 1)

  • Use Apache authentication to Segregate access to Subversion subdirectories

    - by Stefan Lasiewski
    I've inherited a Subversion repository running on FreeBSD with Apache 2.2. Currently we have one project, which looks like this (we use both local files and LDAP for authentication):

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthName "Staff only"
            AuthType Basic

            # Authentication through local file (mod_authn_file), then LDAP (mod_authnz_ldap)
            AuthBasicProvider file ldap

            # Allow some automated programs to check content into the repo
            # mod_authn_file
            AuthUserFile /usr/local/etc/apache22/htpasswd
            Require user robotA robotB

            # Allow any staff to access the repo
            # mod_authnz_ldap
            Require ldap-group cn=staff,ou=PosixGroup,ou=foo,ou=Host,o=ldapsvc,dc=example,dc=com
        </Location>

    We would like to allow customers access to certain subdirectories without giving them global access to the entire repository, and we would prefer to do this without migrating those subdirectories to their own repositories. Staff also need access to these subdirectories. Here's what I tried:

        <Location /www.customerA.com>
            DAV svn
            SVNParentPath /var/svn

            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd-customerA
            Require user customerA
        </Location>

        <Location /www.customerB.com>
            DAV svn
            SVNParentPath /var/svn

            # mod_authn_file
            AuthType Basic
            AuthBasicProvider file
            AuthUserFile /usr/local/etc/apache22/htpasswd-customerB
            Require user customerB
        </Location>

    With the above, access to '/' works for staff. However, access to /www.customerA.com and /www.customerB.com does not work. It looks like Apache is trying to authenticate 'customerB' against LDAP and doesn't try the local password file. The error is:

        [Mon May 03 15:27:45 2010] [warn] [client 192.168.8.13] [1595] auth_ldap authenticate: user stefantest authentication failed; URI /www.customerB.com [User not found][No such object]
        [Mon May 03 15:27:45 2010] [error] [client 192.168.8.13] user stefantest not found: /www.customerB.com

    What am I missing?

  • Warning messages while building the Apache server

    - by GoinOff
    I am building Apache 2.4.6 from source and am not sure about a few warning messages I received during the RPM build process. The build completes OK and everything seems fine. By the way, this is on CentOS 5.5. During the make process:

        /home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr/libtool --silent --mode=install install mod_authn_file.la /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/modules/
        libtool: install: warning: remember to run `libtool --finish /usr/local/apache2/modules'

    What is this warning message about — "remember to run libtool --finish"? Also, I see this:

        libtool: install: warning: `/home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr-util/libaprutil-1.la' has not been installed in `/usr/local/apache2/lib'

    I am building Apache in a temp directory, but libtool seems to be looking in the wrong place (/usr/local/apache2/lib instead of /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/lib). This seems like something I can blow off? In my spec file I set DESTDIR to /home/johnm/dev/project1/install/linux/tmp, where the install files are placed:

        %install
        export DESTDIR=%{buildroot}
        make install

    Both messages appear numerous times during the make process. When I install the RPM on the system, everything appears to work without problems. I'm thinking I can ignore these messages — or am I missing something important?

  • How do I redirect my website from non-www to WWW using Apache2?

    - by Andrew
    I'm currently trying to set up my personal web page. I am using a VPS and have manually installed WordPress, and everything seems to work... except that if I go to the non-www version of my website, I get a page-not-found error.

        www.andrewrockefeller.com  <-- Works
        andrewrockefeller.com      <-- Does not (and I want to redirect it to www.andrewrockefeller.com)

    I have tried adding RewriteEngine functionality to my .htaccess, and that isn't working. I have also tried adding the "most-voted" method to my default file (which apache2.conf pulls from):

        <VirtualHost *>
            ServerName andrewrockefeller.com
            Redirect 301 / http://www.andrewrockefeller.com/
        </VirtualHost>

    Seeing how many people are able to get the above working, is there something else I may be missing to allow that to function? Thank you for your time!

    EDIT: My .htaccess file is as follows:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The WordPress section was auto-created when I changed the settings from ?p=1 ("ugly" links) to pretty links. Any proposed solutions I've found on here I've tried out (restarting apache2 each time), and it hasn't worked.
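
    For completeness, the usual mod_rewrite form of a non-www to www redirect looks roughly like the sketch below — my illustration of the technique the poster says they tried, placed before the WordPress rules so it fires first. It still assumes that DNS for the bare domain points at the VPS and that the VirtualHost answers for it:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^andrewrockefeller\.com$ [NC]
        RewriteRule ^(.*)$ http://www.andrewrockefeller.com/$1 [R=301,L]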

  • MAMP Pro xip.io, fixing URLs with .htaccess

    - by user3540018
    I've got all my websites set up with MAMP Pro. For instance, I have it set up so that when I go to example.com, the browser displays the website that's set up on my iMac. Now I want to get MAMP Pro to work so I can view my website on my other computers/devices (which are all hooked up to the same network). So far all I had to do was check the checkbox "via Xip.io (LAN only)", and now I can view my website on my other computers/devices within my LAN by simply going to example.com.10.0.1.13.xip.io. The problem is, whenever I'm on one of these other computers/devices and click a link, I get a 404 error; i.e., whenever I go to example.com/news I get the 404, but when I go to example.com.10.0.1.13.xip.io/news, then I get the right page. So in order to solve my problem I need to rewrite the URLs, so that whenever someone clicks a link, i.e. goes straight to example.com/news, he'll go to example.com.10.0.1.13.xip.io/news. I don't want to change all the links in my MySQL file, but I believe I can do it simply with the .htaccess file. I've opened the .htaccess file and added the last two lines, but it just doesn't work:

        <IfModule mod_rewrite.c>
            RewriteEngine On

            # Send would-be 404 requests to Craft
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
            RewriteRule (.+) index.php?p=$1 [QSA,L]

            RewriteCond %{HTTP_HOST} ^example\.com
            RewriteRule ^(.*)$ http://www.example.com.10.0.1.13.xip.io/$1 [R=permanent,L]
        </IfModule>

    Or perhaps I don't need to change the .htaccess file at all — is there something that I could be missing in the MAMP Pro settings, or perhaps a MAMP extension that I need?

  • How can I make Internet Explorer 6 render Web pages like Internet Explorer 11?

    - by gparyani
    Now, I know that this may seem like a bad question in that I could just upgrade to Internet Explorer 8, but I am sticking with IE6 because IE8 removes features I value: the ability to save favorites offline, the fact that typing a file path turns the window into a Windows Explorer window, and typing a Web address into Windows Explorer turns it into an IE window. I know that Internet Explorer 6 does a really bad job of rendering some pages. I know of the Google Chrome Frame extension that brings Chrome-style rendering into IE, but that will soon be discontinued. So I tried something else: I know that C:\Windows\System32\mshtml.dll contains the Trident rendering engine used by IE, so I first backed up the original file on Windows XP by renaming it to mshtml-old.dll, then tried to copy in the DLL from a computer running Windows 7 with Internet Explorer 10. I noticed that, after copying, the system had replaced the new DLL with the old one, but left the one I backed up intact. Is there any way I can get the system to not replace the DLL like that, so that I can transfer IE11's mshtml.dll into Windows XP and make IE6 render like IE11? I'm looking for an answer that describes how to tweak my system to make IE6 render like IE11 (or IE10), not one that tells me to upgrade IE or install another browser. I don't care how tedious the method is, as long as it works.

  • Explorer.exe keeps crashing during log in

    - by asif
    I have got a weird problem. My Windows 7 machine has two user accounts (both administrators). I can log in to one account and do all sorts of work. But whenever I try to log in to the other account, it shows a blank screen and a message box pops up with "Windows Explorer has stopped working". The options available are:

        Close the program
        Check online for a solution and close the program

    The problem signature is as follows:

        Problem Event Name:       InPageError
        Error Status Code:        c000009c
        Faulting Media Type:      00000003
        OS Version:               6.1.7601.2.1.0.256.1
        Locale ID:                1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and then select Start Task Manager, it also crashes. I cannot run any program using the runas command (from the good profile) either; Task Manager and runas show the same problem signature. I read the similar question and followed all the steps, but no luck. Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, but the file is there. The actual message is:

        Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is: how can I resolve this issue? Should I just delete the file, or replace it with another one, to stop explorer.exe from crashing? Off-topic: what is the content of this file, and why is it necessary?

  • Postfix "loops back to myself" error on relay to another IP address on same machine

    - by Nic Wolff
    I'm trying to relay all mail for one domain, "ourdomain.tld", from Postfix running on port 2525 of one interface to another SMTP server running on port 25 of another interface on the same machine. However, when a message is received for that domain, we're getting a "loops back to myself" error. Below are netstat and postconf output, the contents of our /etc/postfix/transport file, and the error that Postfix is logging. (The high bytes of each IP address are XXXed out.) Am I missing something obvious? Thanks.

        # netstat -ln -A inet
        Proto Recv-Q Send-Q Local Address               Foreign Address             State
        ...
        tcp        0      0 XXX.XXX.138.209:25          0.0.0.0:*                   LISTEN
        tcp        0      0 XXX.XXX.138.210:2525        0.0.0.0:*                   LISTEN

        # postconf -d | grep mail_version
        mail_version = 2.8.4

        # postconf -n
        alias_maps = hash:/etc/aliases
        allow_mail_to_commands = alias,forward
        bounce_queue_lifetime = 0
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        default_privs = nobody
        default_process_limit = 200
        html_directory = no
        inet_interfaces = XXX.XXX.138.210
        local_recipient_maps =
        local_transport = error:local mail delivery is disabled
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq
        manpage_directory = /usr/local/man
        message_size_limit = 10240000
        mydestination =
        mydomain = ourdomain.tld
        myhostname = ourdomain.tld
        mynetworks = XXX.XXX.119.0/24, XXX.XXX.138.0/24, XXX.XXX.136.128/25
        myorigin = ourdomain.tld
        newaliases_path = /usr/bin/newaliases
        queue_directory = /var/spool/postfix
        readme_directory = /etc/postfix
        recipient_delimiter = +
        relay_domains = ourdomain.tld
        relay_recipient_maps =
        sample_directory = /etc/postfix
        sendmail_path = /usr/sbin/sendmail
        setgid_group = postdrop
        smtpd_authorized_verp_clients = $mynetworks
        smtpd_recipient_limit = 10000
        transport_maps = hash:/etc/postfix/transport
        unknown_local_recipient_reject_code = 450

        # cat /etc/postfix/transport
        ourdomain.tld    relay:[XXX.XXX.138.209]:25

        # tail -f /var/log/maillog
        ...
        Aug  2 23:58:36 va4 postfix/smtp[9846]: 9858A758404: to=<nicwolff@... >, relay=XXX.XXX.138.209[XXX.XXX.138.209]:25, delay=1.1, delays=0.08/0.01/1/0, dsn=5.4.6, status=bounced (mail for [XXX.XXX.138.209]:25 loops back to myself)
