Search Results

Search found 12330 results on 494 pages for 'old retired dude'.

Page 379/494 | < Previous Page | 375 376 377 378 379 380 381 382 383 384 385 386  | Next Page >

  • Moving a Lotus database causes incoming mail failure

    - by Logman
    Hey all, I moved a couple of Lotus Notes databases to another hard drive (we ran out of space) by using a directory link/pointer on the Domino server: a text file with the .DIR extension containing the desired new path. That part works fine. I open a Lotus Notes client, use the ID, and open the database; the directory link resolves, and the database opens from its new location with no problems.

    We then found out that we couldn't FORWARD any mail. We did the following to make it work: go to the menu File | Mobile | Edit Current Location, go to the Mail tab, and enter the correct path for "Mail File:". For example, it should read "mail\morespace\flabor" and not "mail\flabor" ("morespace" is the directory link/pointer). We can forward and reply to emails fine after that fix.

    The remaining problem is that the user has no incoming email. Domino is still trying to deliver the user's incoming mail to its old database location, "mail\flabor", instead of "mail\morespace\flabor", and the delivery error says the user does not exist. Is this a cache problem? We have restarted the server ("Q" at the console prompt), though we have not completely shut it down. Thanks, Frank
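
    For readers unfamiliar with Domino directory links, a minimal sketch of the mechanism described above (all paths are hypothetical, not the poster's actual ones): the link is a plain-text file with a .DIR extension placed in the server's data directory, and its first line holds the absolute path of the relocated folder.

        rem show the link file's contents (Windows console on the Domino server)
        type C:\Lotus\Domino\Data\mail\morespace.dir
        rem expected output: the single line below, pointing at the new location
        E:\DominoData\mail\morespace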

    Read the article

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right Stack Exchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like:

        $ git push
        # ...
        error: cannot run hooks/post-receive: No such file or directory

    Hook exists: the post-receive hook/symlink exists and is executable:

        -rwxr-xr-x 1 git git 470 Oct  3  2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git  45 Oct  3  2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive

    It's executable by GitLab: the gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output):

        sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK

    GitLab can find it: GitLab is looking for hooks in the correct location:

        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
          hooks_path: /home/git/.gitolite/hooks/

    and:

        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ...
        /home/git/.gitolite/hooks/common/post-receive exists? ............YES

    Environment: the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:

        $ env -i redis-cli
        redis>

    I've run out of debugging ideas on this one. Does anybody have any suggestions?
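
    One avenue the checks above don't cover, offered as a hedged suggestion: git reports "No such file or directory" not only for a missing hook, but also when the interpreter named in the hook's shebang line is missing, or when the symlink resolves differently in the environment the push runs under. A quick check might look like:

        # which interpreter does the hook ask for?
        head -1 /home/git/.gitolite/hooks/common/post-receive
        # does the symlink resolve to a real file from the repository side?
        readlink -f /home/git/repositories/project.git/hooks/post-receive
        # if the shebang names e.g. ruby, is it on the PATH git sees?
        which ruby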

    Read the article

  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005, since it does not play well on 2008 R2. We would like to know why, and how to fix it.

    Hardware setup: our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers; it has served us well, and we never felt the need to upgrade it. The database is about 2 GB and holds 6000 sessions. (The sessions are a little big, but we need it.) We have another server with SQL Server 2008 R2, a much faster CPU, much more RAM, and a much faster hard disk.

    Situation: one day, we had a huge surge in traffic. The transaction log growth on SQL Server froze the server for tens of seconds at a time, allowing only a few requests through per minute. So we loaded up the new server with ASPState, with very large data and log files, and pointed all of our applications at the new server. It chugged along fine for about 5 minutes, then CPU usage jumped to 50% of the 16 cores that Standard Edition can use, and it froze for tens of seconds at a time. The files do not record any autogrowth events, the disk queue is nice and low, and RAM usage is low. CPU usage on our old server has never been higher than 5%. What happened on the new server?

    Alternatively, I would like to hear success stories of the ASP.NET session state database running on SQL Server 2008 R2 with an average write load of 30 MB/sec and bursts up to 200 MB/sec.
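
    As a first diagnostic, a hedged sketch (the server name is a placeholder): capturing the top wait types during one of the freezes shows whether the stalls are I/O, latch, or lock related, which narrows down "what happened" considerably.

        sqlcmd -S NEWSERVER -E -Q "SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC"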

    Read the article

  • Cannot load from raid with grub

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS system, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash
        # see RAID state; State should be "clean, degraded"
        mdadm -Q -D /dev/md0
        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shut down computer
        shutdown now
        # physically replace old disk with new, start system again
        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state; State should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0
        # to see status you can use
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. So:

    1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0.
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices: /dev/md0".

    I cannot boot my system except from /dev/sdb1 or /dev/sda1, and then only in DEGRADED mode. This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? I have a big headache with this.
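
    A hedged note for readers hitting the same wall: an md device that carries a filesystem directly, with no partition table, is normal, and fdisk will always complain about it. The usual approach for booting a RAID1 pair is to install GRUB into the MBR of each member disk rather than into /dev/md0, so that either disk can boot alone. A minimal sketch, assuming the stock GRUB 2 shipped with Ubuntu 12.04:

        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub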

    Read the article

  • Notebook Operating System with extreme support cycles/security updates

    - by leto
    Hello there. After reading the announcements about Mac OS X "Lion" and Apple's political decisions, I've had enough. I'm a longtime Apple user, since 1992, and have always felt at home there, but I have been trying to switch to an alternative operating system for a year now. I've also been working with Unix machines since 2001, so I'm looking at one of the free Unices or a Linux. Since I last looked at the desktop scene in 2002 (choke) much has changed, it seems, so I'm lost once more in the war between desktop environments and software. To be honest: I don't care what its name is, I want to get my job done. Here are the landmarks I set for an operating system/software to be considered:

    - Has to be at least four years old
    - Has to supply security updates for the current release for at least a year
    - Production-quality stability for the whole desktop environment (!)
    - No f****g commercial stuff that tends to supply me with a privacy-invading App Store or Cloud space

    So far I'm running a MacBook from 2007, 4 gigs of memory, 250 gig disk, and I need:

    - IMAPS for mail (in use since 1995)
    - Web browser
    - Shell
    - Keeping current with updates/upgrades with no more than 5 minutes spent entering commands (makes it hard for OpenBSD ;-) )
    - A desktop file manager would be nice, but is a bonus

    What can you suggest as an operating system? The one with the longest support cycles and the best chance to survive the next 10 years will win a new user, who will even send patches when needed :-) Greets

    Read the article

  • DV-AVI in DivX Plus Player and VirtualDub - playback issue

    - by user714965
    I have an AVI video which I transferred (digitally) from a DV camera to my PC. The video contains errors at the beginning, most likely because the DV tape was pretty old. When playing this video in the DivX Plus Player I get some picture artefacts and some loud noise peaks in the sound, which stop after two seconds. When I play this video in VirtualDub (where I want to cut it) I get the same picture artefacts, but the sound errors (those loud high peaks) last ten seconds, and those sound errors are also carried over into the cut video. Why does the video have more errors when played in VirtualDub? I think it is because different codecs are used for decoding the video. How can I change the codec that VirtualDub uses for decoding? I have installed ffdshow for this, but it seems it is not being used, because I don't get the ffdshow icon in the taskbar when playing the video in VirtualDub. When playing in DivX Plus Player I do get the icon.

    Read the article

  • How to Re-install an App that Shows up in the App Store as 'Update' Instead of 'Buy App'

    - by Craig Reville
    So, long story short: I dropped the wrong app into CleanMyMac and hit 'cancel', but it was too late by that point. I rebooted, and the App Store said it had an update; when I opened the App Store, it was showing an update for the app I had just uninstalled. I tried clicking 'update', but it gives me an error saying it's unable to install after 'downloading'. When I go into 'purchased apps', it shows the app as uninstalled, so I click 'install' and I get an error saying it's already installed. I'm running OS X Lion, latest version, fully updated; the MacBook Pro is only a few months old.

    I tried searching through the entire system to remove all traces of the app. After rebooting, the App Store no longer shows the app and no longer shows the update, but on the app's page it still says 'Update'. I tried reinstalling the app from the desktop, outside of the App Store, and again it says the app is 'already installed'. Reading more about Lion, I found an article about 'BundleID' being the thing that tells the App Store what's installed and what needs updating; however, I can't find the location where the BundleID would be stored. Any thoughts? I've tried CCleaner, AppCleaner, etc. and none of them show the app, mainly because it is uninstalled.

    Update: I've spoken to Apple Support, who confirmed that there is a file in the system that connects separately to tell the system if there are updates available; however, they declined to give me any further details. Apple also referred me from technical support to iTunes App Store support (as opposed to Mac App Store support), and from there I have been referred to AppleCare, who are currently 'investigating' this issue. Hopefully there will be a fix that's simple to implement for people having similar issues; this appears to be a more common issue than I previously thought.
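
    For anyone wanting to chase the leftover metadata themselves, a hedged sketch (the bundle identifier below is hypothetical; the real one is in the app bundle's Info.plist): Spotlight can locate anything still registered under a bundle ID, and pkgutil lists the install receipts the system still knows about.

        # find any copy of the app still indexed under its bundle ID
        mdfind "kMDItemCFBundleIdentifier == 'com.example.theapp'"
        # list install receipts that mention the app
        pkgutil --pkgs | grep -i theapp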

    Read the article

  • pfSense, Active Directory, local domain

    - by Dalton Conley
    First things first: I have no idea what I'm doing, and I'm certainly not afraid to admit that. Here is my network setup. I have two servers, one of which is a domain controller. Both are running Windows Server 2008 and they have replicated directories. Each server is at a different location, and each location has its own firewall, both running pfSense. Recently a firewall went down and my coworker reinstalled pfSense, and everything seems set up correctly; again, I have no idea what I'm doing, so I'm not sure. I have records from when the previous IT person set up this network, and the firewall settings are the same as in those records, but those records could be extremely old.

    Now, I have a domain name for my network; we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated shares (i.e. \\mydomain.net). Now I cannot. I can, however, access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the servers, which is what makes me think it's something to do with the firewall. I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply? I'm a programmer, not a network administrator.
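
    Since \\mydomain.net depends on DNS resolving the bare domain name to the domain controllers, a hedged first debugging pass from an affected client (names are the placeholders used above) is to compare how each name resolves:

        nslookup mydomain.net
        nslookup server1.mydomain.net
        rem confirm which DNS server the client is actually using
        ipconfig /all

    If mydomain.net fails to resolve while server1 resolves fine, the usual suspect is clients pointing at the reinstalled firewall's DNS forwarder instead of the Active Directory DNS server.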

    Read the article

  • What is the difference between disabling hibernation and idling time for a NAS?

    - by Gary M. Mugford
    I have two D-Link DNS-323 NAS boxes, each with two Seagate drives. The first box is about a year old, the second about three months. The first two drives, in the box named Monster, are 1.5 TB each, while the two in the newer box, Origami, are 2 TB each. I have never been overly happy with the Monster drives but, outside of poor throughput on small files, they have been consistently available to all programs ever since I put a batch file into my startup folder to do a directory listing of each. I added the two new drives when I added the Origami box. But, watching the DOS box that comes up, I rarely see both listed before the window disappears. Other programs (backups, Belarc, even my file browsers) seem to have a dickens of a time seeing O: and P:.

    Finally, I decided to go into the setup and turn off hibernation. Performance HAS been better since, and Belarc, for instance, now sees both drives. While poking around, I noticed an Idle Time feature too. What is the difference between the two settings? And for added points, how much trouble am I in for turning off hibernation? The super bonus round: anything ELSE I should have done? Thanks in advance, GM
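
    For reference, a minimal sketch of the kind of startup batch file described above (the drive letters for the Monster shares are assumed; only O: and P: are named in the post):

        @echo off
        rem touch each mapped NAS share so the drives spin up before other programs need them
        dir M:\ > nul
        dir N:\ > nul
        dir O:\ > nul
        dir P:\ > nul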

    Read the article

  • CentOS vps is randomly rebooting

    - by develroot
    I have a CentOS VPS (Parallels Virtuozzo container) which has been running for months. A few days ago, however, it started to randomly reboot itself, and I can't find out why. The biggest thing I don't understand is that it takes 40 minutes to come back up (as far as I can see in the logs):

        root ~ # cat /var/log/messages | grep shutdown
        Oct 11 13:52:11 vps27 shutdown[23968]: shutting down for system halt
        Oct 14 14:55:17 vps27 shutdown[30662]: shutting down for system halt
        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt

    And notice the time difference between the shutdown and xinetd's start:

        Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt
        Oct 15 06:21:24 vps27 init: Switching to runlevel: 0
        Oct 15 06:21:27 vps27 saslauthd[30614]: server_exit : master exited: 30614
        Oct 15 06:21:38 vps27 named[30661]: shutting down
        Oct 15 06:21:47 vps27 exiting on signal 15
        Oct 15 07:04:34 vps27 syslogd 1.4.1: restart.
        Oct 15 07:05:06 vps27 xinetd[1471]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Oct 15 07:05:06 vps27 xinetd[1471]: Started working: 0 available services

    And here's what Parallels Power Panel says in terms of status changes:

        Time                      Old Status    Status Obtained
        Oct 15, 2011 06:23:46 AM  Mounted       Down
        Oct 15, 2011 06:22:31 AM  Running       Mounted
        Oct 14, 2011 03:06:48 PM  Starting      Running
        Oct 14, 2011 03:06:23 PM  Down          Starting
        Oct 14, 2011 03:06:08 PM  Mounted       Down
        Oct 14, 2011 02:58:24 PM  Running       Mounted

    For some reason it goes into Mounted mode and then restarts itself. The only problem I can imagine is disk space utilization, which is now 84%. But can that be a reason for a system halt?

        Time                      Category  Details
        Oct 15, 2011 07:08:33 AM  Resource  counter_disk_share_used yellow alert on environment vps27 (current value 82, soft limit 85, hard limit 95)
        Oct 15, 2011 06:27:23 AM  Resource  counter_disk_share_used yellow alert on environment vps27 (current value 82, soft limit 85, hard limit 95)
        Oct 15, 2011 06:23:50 AM  Resource  counter_disk_share_used green alert on environment vps27 (current value 0)
        Oct 14, 2011 03:06:24 PM  Resource  counter_disk_share_used yellow alert on environment vps27 (current value 83, soft limit 85, hard limit 95)
        Oct 14, 2011 03:05:50 PM  Resource  counter_disk_share_used green alert on environment vps27 (current value 0)
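
    A hedged debugging sketch from inside the container: a Virtuozzo/OpenVZ container is often stopped by the host node when it trips a resource limit, and that decision never appears in the container's own logs. The beancounter failure counters are visible from inside, though:

        # failcnt > 0 in the last column means the host has been denying that resource
        cat /proc/user_beancounters
        # disk space and inodes can both trip container limits
        df -h
        df -i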

    Read the article

  • Outbound mail issue during Exchange 2003 migration

    - by user27574
    Dear all, I am having an outbound email issue during an Exchange 2003 migration. Basically, we are migrating Exchange 2003 to new hardware; both servers are Server 2003 based. Everything ran smoothly while setting up and installing Exchange 2003 on the new box, and the public folders are all replicated. My issues are shown below.

    1) After starting to move users' mailboxes to the new Exchange 2003 server, they receive some undeliverable and bounced-back mail from some vendors. When I move a few users back to test, they have no problem at all after moving back to the old Exchange 2003 server.

    2) Another issue: our company has BlackBerry users, but we don't have BES. Under each user's mailbox we have a forwarding rule set up, so that both the user's inbox and the BlackBerry receive email. For users who were moved to the new Exchange 2003 server, mail can only be sent to the user's inbox; it cannot be forwarded to the BlackBerry at all, and it stacks up in the SMTP queue, which keeps retrying until the messages expire.

    Since not all emails that the users send out from the new Exchange server have problems, I am not able to narrow down what the issue is. Can anyone give me some ideas? Could this be MX record or reverse DNS related? Or a firewall NAT rule setting? Thanks.
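
    Given the suspicion at the end of the post, a hedged sketch of the external identity checks (domain and IP are placeholders): receiving servers commonly reject mail when the sending IP's reverse DNS or HELO name doesn't line up, and moving mailboxes to a new box can silently change which IP the mail leaves from.

        rem what the world sees as your mail exchangers
        nslookup -type=mx yourdomain.com
        rem reverse DNS of the address the new server sends from
        nslookup 203.0.113.10
        rem the banner the new server presents on port 25
        telnet mail.yourdomain.com 25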

    Read the article

  • How to extend a Linux PV partition online after virtual disk growth

    - by Yves Martin
    VMware allows you to extend the size of a virtual disk online, while the VM is running. The expected next steps for a Linux system are:

    1. Extend the partition: delete it and create a larger one with fdisk
    2. Extend the PV size with pvresize
    3. Use the free extents for lvresize operations
    4. Run resize2fs on the file system

    But I am stuck on the first step: fdisk and sfdisk still display the old size for the disk. My disk is a SCSI virtual disk connected via the virtual LSI Logic controller. How do I refresh the virtual disk size and partition table information available to the Linux kernel, without a reboot?

    As far as I know, all these steps are possible on a running Windows system without a reboot, and even without any user action, thanks to the VMware tools. On Linux, I expect to do all the steps online too, and I already know steps 2, 3 and 4 work online. But the first one, changing the partition size declared in the partition table, (still) seems to require a reboot.

    Update: my system is Debian Lenny with kernel 2.6.26, and the disk I have extended is the main disk, with a large PV containing the "root" LV for "/".
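
    A hedged sketch of the missing refresh step (device paths assume the disk is sda; adjust as needed): the kernel can be asked to re-read a SCSI device's capacity through sysfs, after which fdisk sees the new size. Re-reading the partition table of the disk holding the mounted root filesystem is the part that may still be refused; the capacity rescan itself works online.

        # ask the kernel to re-read the disk's capacity
        echo 1 > /sys/block/sda/device/rescan
        # confirm it worked
        dmesg | tail
        fdisk -l /dev/sda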

    Read the article

  • Finding useful crash-information in Windows 8 Consumer Preview

    - by Lukas Knuth
    I'm currently diving into C# and wanted to play around with the new Metro-styled applications introduced with Windows 8, so I updated my Windows 7 to the Windows 8 Consumer Preview. The problem I'm facing right now is that the system freezes after 3-5 minutes. It does not take any input from the keyboard or mouse, and it does not recover (at least not in less than 10 minutes). Since I have a background in Linux, I'd like to find some information about the cause of the freeze, but I have no idea where to search. I checked the system logs (under "System Control" - "Management"), but they only record that the system was shut down unexpectedly (due to the fact that I held down the power button to reboot the PC). There is no useful crash information in there. I don't want to spend hours randomly reinstalling drivers and doing things that "might help". Isn't there any place I can find some useful information about the freeze?

    Before you ask: I installed Windows 8 as an update on my old Windows 7 installation (which worked fine, by the way). My hardware fits the minimum requirements (specs can be found here; the Mac mini 3,1 model with the 2 GHz processor). I have updated the graphics card drivers to the newest Windows 8 drivers from nVidia.
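
    One way to make a hard hang produce evidence, offered as a hedged sketch: Windows can be configured to write a memory dump on demand from the keyboard (hold right Ctrl, press Scroll Lock twice), which turns an undebuggable freeze into a dump file for WinDbg. For a PS/2 keyboard the switch is the registry value below; USB keyboards use the kbdhid service key instead, and a reboot is required either way.

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
        rem after the next freeze: hold right Ctrl, press Scroll Lock twice
        rem then open C:\Windows\MEMORY.DMP in WinDbg and run !analyze -v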

    Read the article

  • Windows Domain Chaos - Any Approach to Solving It?

    - by Chake
    We are running an old Windows 2003 server as domain controller (DC2003). To safely migrate to Windows 2008 R2, we added a 2008 R2 machine (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2, everything seemed to be OK: the new DC appeared under the "Domain Controllers" node. It wasn't checked at that time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess, with non-functional Exchange and Team Foundation Server. After that, I got the job of fixing it...

    First I thought it could be a problem with DC2008R2, so I removed it as a domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it all; maybe I made a mistake, so I'm willing to redo it with your suggestions.

    To have a starting point, I ran the Best Practices Analyzer, which ended up with 24 "compatible" and 26 "not compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so this may not be the exact wording):

        Problem: Using the Best Practices Analyzer for Active Directory Domain Services (AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2.

    I appreciate any suggestions; this really bothers me.
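
    Before redoing anything, a hedged sketch of the standard health checks for a case like this, run on the 2008 R2 box; these usually localize whether DNS, replication, or FSMO role ownership is the broken layer:

        dcdiag /v
        repadmin /replsummary
        repadmin /showrepl
        netdom query fsmo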

    Read the article

  • Samsung laptop randomly shuts down

    - by Dmatig
    I've rewritten this question because it turned into an indecipherable mess. I have a Samsung R560 laptop that is overheating and consistently shutting itself down under load. Thank you, quickcel, for recommending SpeedFan to monitor my temperatures. (Ignore "Temp1" and "Temp2"; whatever sensors they are, they are always random, and I'm pretty sure they're broken.) The load temperature is after just 5 minutes of playing Fallout 3; another 5 minutes and the GPU (9600M GS) consistently breaches the mid 90s and the machine shuts down, so it's hard to get a good picture of it. I'm looking for some solution or way to decrease these temperatures, because they seem far too high, even at idle.

    I've tried:
    - Opening up the case and clearing out all dust with compressed air
    - Updating the drivers for my graphics card
    - Purchasing a notebook cooler, which I am now using

    I don't want to:
    - Undervolt/underclock (defeats the point of having a more expensive card)
    - Use lower power/performance settings (again, I might as well have bought something cheaper)

    Is there anything else I can try (software or inexpensive hardware) that can help me fix this? Has anybody had a Samsung laptop and knows if this can be sorted under my warranty, and the turnaround time of sending it off (UK)? It has always run hotter than it should, but now, at 6 months old, it is getting hot enough to power off.

    Read the article

  • How do I fix a super slow MacBook?

    - by MakingScienceFictionFact
    I'm running a black MacBook 4,1: Intel Core 2 Duo @ 2.4 GHz, 2 GB RAM, 250 GB hard disk drive, 800 MHz bus speed. It's about three years old and in excellent shape externally; I treat this thing like a baby. It used to run awesomely, but now it's super slow at everything. I get the spinning pizza of death constantly. It takes a long time to boot up or load any program, even Safari and iTunes, and iPhoto is terribly slow. The Internet doesn't work properly, and it reminds me of a buggy PC. I've formatted it and reinstalled Mac OS X 10.6 (with all updates), and I've done the disk repair process. As an iOS developer this is driving me crazy, but luckily I have an iMac to work on during the day, which is fast. I'm ready to format it again, but that didn't work last time. After the last format I copied files back from an external drive, so maybe the offending files were hidden in there somewhere.

    Here are the hard disk drive and RAM specifications. (It is upgradeable to 4 GB of RAM.)

    Hard disk drive: the Fujitsu Mobile MHY2250BH is a 250 GB, standard hard disk drive. Its burst transfer rate is 150 MByte/s. This is a 5400 RPM drive and it comes with an 8 MB buffer.

    RAM: two sticks of 1 GB DDR2 SDRAM, speed: 667 MHz.
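
    Constant beachballing across every application often points at a dying hard drive rather than software; a hedged two-line check before reaching for another reformat:

        diskutil info disk0 | grep SMART
        # anything other than "Verified" here means the drive itself is failing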

    Read the article

  • Recovering drive via boot to Win7 setup command prompt

    - by Valamas
    I am trying to recover data from two old IDE drives. Drive1 has been successful, but something is wrong with Drive2: it does not appear as a drive letter. Due to limited legacy hardware, the only way I can see these drives is to boot using Windows 7 setup and go to the command prompt. Without going further into why, my question is how I can access the data from this command prompt. I discovered the DISKPART command, and while I am a first-time user, it looked like something that could fix my problem. Here are the results of my DISKPART commands; at the bottom is an image of the commands, taken with a camera. Drive2 is present, because I can see it when using DISKPART.

    How can I copy the information using a robocopy script if the drive letter is not available? How can I assign a drive letter? Is there any repair command I need to execute? When I execute DISKPART, this is what I see:

        DISKPART> LIST DISK

        Disk ###  Status   Size    Free
        Disk 5    Online   37 GB   2048 KB

    So then I select disk 5:

        DISKPART> SELECT DISK 5
        Disk 5 is now the selected disk.

        DISKPART> LIST PARTITION

        Partition ###  Type     Size
        Partition 1    Primary  101 MB
        Partition 2    Primary  37 GB

    So I select partition 2:

        DISKPART> SELECT PARTITION 2
        Partition 2 is now the selected partition.

    I then try to assign a drive letter:

        DISKPART> ASSIGN LETTER=G
        There is no volume specified.
        Please select a volume and try again.

    When I list volumes, the drive is not present:

        DISKPART> LIST VOLUME

    (The photo shows the result of the above commands.)
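
    A hedged note on what DISKPART is saying here: ASSIGN operates on volumes, not partitions, and a partition only surfaces as a volume when Windows recognizes a filesystem on it. So the sketch below (the volume number is hypothetical; use whichever number LIST VOLUME shows for the 37 GB partition) only helps if the volume ever appears in the list. If it stays absent, the filesystem itself is damaged, and a read-only recovery tool is safer than any repair command.

        DISKPART> LIST VOLUME
        DISKPART> SELECT VOLUME 3
        DISKPART> ASSIGN LETTER=G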

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005 instance, which was causing the backups to fail. By the time they had realised this, the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails; the SQL Server backup task fails, Copy Database fails, and the Export Data wizard fails. It seems all the data can still be read from the tables (i.e. using bcp, etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean the changes aren't all being written to the MDF?)

    What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    1. Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    2. Export the data from the database using bcp; create the database on another SQL Server (using the Generate Scripts function on the corrupt DB to create the schema on the new DB) and use bcp again to import the data?
    3. Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not so sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates; I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
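
    For the nightly failsafe mentioned at the end, a hedged sketch of a per-table bcp export (server, database, and table names are placeholders; -T uses a trusted connection, -n keeps SQL Server native types):

        bcp ProdDb.dbo.Orders out C:\exports\Orders.bcp -S PRODSERVER -T -n
        bcp ProdDb.dbo.Customers out C:\exports\Customers.bcp -S PRODSERVER -T -n

    The matching import on a clean server is the same command with "in" instead of "out", run after the schema has been recreated from the Generate Scripts output.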

    Read the article

  • Updating Samba From RPMs

    - by KnickerKicker
    My Red Hat Enterprise Linux 4 comes with Samba 3.0.10, which does not support the "inherit owner" attribute that is essential for implementing a deny-delete, write-once-read-many share (for examples, search Google for a-shared-drop-box-using-samba). (BTW, if anybody knows an alternative way to do it without updating Samba, I'm all ears!)

    I am not all that comfortable building from source, and after hours of googling (no, I do not have a Red Hat subscription, so I cannot just run the up2date command), I found a whole bunch of RPMs on http://ftp.sernet.de/pub/samba/tested/rhel/4/i386/ (Samba 3.2.15 for RHEL 4). Next, I tried updating them with the rpm -U --nodeps command, but I got file conflict errors. So I went ahead and overwrote everything (or so I thought) by using rpm's --force option. But no good has come of all that: /usr/sbin/smbd -V still returns the old version. As of now, rpm -qa | grep samba returns:

        samba3-client-3.2.15-40.el4
        samba-3.0.10-1.4E.2
        samba-client-3.0.10-1.4E.2
        system-config-samba-1.2.21-1
        samba3-3.2.15-40.el4
        samba-common-3.0.10-1.4E.2
        samba3-winbind-3.2.15-40.el4

    I cannot remove the older packages because:

        samba-common >= 3.0.8-0.pre1.3 is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) kdebase-3.3.1-5.8.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64

    Now that's a whole bunch of dependencies that I dare not touch :) Any and all pointers are welcome at this stage. Thanks in advance!
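
    A hedged sketch for untangling which binary is actually running: the SerNet samba3 packages appear to have installed alongside the stock samba packages rather than replacing them (which matches the rpm -qa output above), so it is worth asking which package owns the smbd on the PATH and where the new one landed:

        which smbd
        rpm -qf $(which smbd)
        # list the files the samba3 package installed, to find its smbd
        rpm -ql samba3 | grep smbd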

    Read the article

  • WordPress permalinks not working, everything seems fine

    - by javipas
    I have a WordPress blog I've migrated from another CMS, and I've been having a lot of problems with my permalink structure: lots of articles give a 404, although they are there, somewhere, published. The site is www.muycomputerpro.com (MCP for short), and an example of an article that should be found is:

        http://muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa

    If I do a search with the search tool at MCP, the result is there (see EnlacesMCP-1.jpg). But when I click on the link, our 404 error page appears (see EnlacesMCP-2.jpg). The weird thing is, the article is published, and the permalink is the right one, as you can see on the screenshot of the WordPress CMS: the permalink (below the title) is correct (http://www.muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa/), but it does not work. In fact, if I try to use the short link (http://www.muycomputerpro.com/?p=5023), the article does not show either. I've accessed my WordPress DB and searched for the article to see if there is something wrong there, but from what I can tell, all the fields are OK.

    I really don't know what is causing this. The permalink structure should work (I'm using the "Custom Permalinks" plugin to preserve the old URLs, which had an alphanumeric code at the end of the post name), and the permalink config in WordPress is "/%postname%/". I really need help :(
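
    A hedged aside: the fact that even the ?p=5023 short link fails suggests the post's status in the database, not the rewrite rules, may be at fault. Still, since migrated sites most often break on stale rewrite rules, re-saving Settings > Permalinks (which flushes the rules) and confirming the standard .htaccess block is intact costs nothing:

        # the standard WordPress rewrite block, for reference
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>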

    Read the article

  • Do the same LAME settings (--alt-preset standard) have different names?

    - by erikric
    I've always used Windows, and therefore EAC, to rip my CDs, but since I've started using Ubuntu more often, I decided to try ripping some albums there. I ended up using K3b (since I found it in the Ubuntu Software Center; I tried to install RubyRipper first, but when 'sudo apt-get install' or the USC fails, a Windows user like me is lost).

    The real question here is about the settings for the LAME encoder. I'm used to just writing --alt-preset standard, and everything works like a charm, but the default in K3b looks like this:

        lame -r --bitwidth 16 --little-endian -s 44.1 -h --tt %t --ta %a --tl %m --ty %y --tc %c --tn %n - %f

    I assume these are some sensible LAME settings, and not a malicious Perl script (although it looks like one). It seems to me that some of these ought to be there, and that I cannot overwrite the whole thing with my good ol' --alt-preset. So, the question is: do I need to replace anything, or is -h the same as the old --alt-preset? Is there a difference between '--preset standard' and '--alt-preset standard'? And are those the same as -V 2?
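
    For what it's worth, a hedged note on those flags: in recent LAME versions, '--preset standard' and the older '--alt-preset standard' name the same VBR profile, equivalent to '-V 2', while '-h' on its own merely means "higher quality at the default CBR bitrate" and is not a preset at all. The leading K3b flags (-r, --bitwidth, --little-endian, -s) just describe the raw PCM input, and the --tt/--ta/... options are ID3 tags, so a sketch of a preset-based replacement keeps those and swaps out only the -h:

        lame -r --bitwidth 16 --little-endian -s 44.1 --preset standard --tt %t --ta %a --tl %m --ty %y --tc %c --tn %n - %f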

    Read the article

  • Troubleshooting wireless client-bridged networks between two DD-WRT routers?

    - by KronoS
    I recently purchased a Buffalo N600 wireless router, which came with DD-WRT pre-installed. I want to take my old wireless router, a Linksys WRT54GL, also with DD-WRT pre-installed, and use it as a wireless bridge for the HTPC and Blu-ray player in the other room. In other words, I'm trying to connect the wired devices to the network via the wireless link between the routers. I followed exactly the instructions from DD-WRT's manual for 'Client Bridged'; however, I'm still not able to connect the two routers when encryption is enabled (WPA2 Personal Mixed). I am able to connect them when there is NO encryption. I've checked, double-checked, and triple-checked that EVERYTHING is the same on BOTH routers:

    Routers 1 & 2
    - Encryption: WPA2 Personal Mixed
    - Wireless Mode: G-Only
    - Wireless Channel: 6
    - Subnet Mask: 255.255.255.0
    - Subnet: 192.168.1.0/24
    - SSID: Krono$

    Primary Router #1 (Buffalo N600)
    - IP Address: 192.168.1.1
    - Firewall: Enabled with defaults
    - DHCP: Enabled as DHCP server

    Secondary Router #2 (Linksys WRT54GL)
    - IP Address: 192.168.1.2
    - Firewall: Disabled, as per the DD-WRT instructions

    I'm looking for any configurations that I may have missed, or settings that may need to happen in order for this to work.

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    Hello - I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it, and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade, hoping it would help with my performance issues. The computers in my house are connected as follows:

    - Comcast Business Class commercial 25 Mbps/10 Mbps (verified with SpeakEasy and Speedtest.net)
    - D-Link DGL-4500 Wireless N router
    - Windows 7 x64 - D-Link DWA-552 Wireless N
    - Windows 7 x64 - D-Link DWA-552 Wireless N
    - Mac mini 10.6.2 - AirPort Extreme N
    - PlayStation 3, hard wired
    - Xbox 360, hard wired

    Essentially, the problem is very specific: web browsing and uploading/downloading files from the Internet are fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900 Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup, the response time is horrible. Both of my Windows computers show strong wireless signals with a connection speed of 300 Mbps. I know I can never expect to achieve anything near those speeds, but 500 Kbps? Here is what I have tried so far:

    - Enabled single-mode N-only and N/G-only on the router
    - WPA2 with AES encryption
    - Disabled "Remote Differential Compression" in Windows 7
    - Disabled TCP "auto-tuning"
    - Used other software for file copies, such as TeraCopy

    I am at the end of my rope. Unfortunately, I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
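
    To separate radio problems from Windows file-sharing problems, a hedged sketch using iperf (the address is a placeholder): a raw TCP throughput test between two machines takes SMB, HomeGroup, and disk speed out of the equation, and testing wireless-to-wired through the router isolates one radio link at a time.

        rem on the receiving machine
        iperf -s
        rem on the sending machine
        iperf -c 192.168.0.10 -t 30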

    Read the article

  • Swapping out a hardware firewall - does the MAC address get cached?

    - by Dan
    We need to replace a hardware firewall (a Cisco PIX) and have a spare that we will use (temporarily). The firewall sits in front of a couple of web servers colocated at a data centre. The replacement will be configured with identical settings (external/internal IP addresses, configured ports, etc.). When we swap the firewalls over, will this work immediately, or will the old PIX's MAC address be cached and the new firewall not be seen until the cache is cleared? (And what is it that caches the address? Is it just the switch/router that our PIX is connected to?)

    The reason for asking: a few years ago I had a Smoothwall firewall in front of a lone server (the external IP of the Smoothwall was also the external IP of the web server). When I replaced the Smoothwall with a PIX, the IP address of the web server stayed the same, but it now had to be reached via the new firewall on a different IP. It took about 2-4 hours before the rest of the world could see that web server again. I'm hoping for less downtime this time!
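
    A hedged sketch of the mechanism: it is the upstream router's ARP cache that maps the shared external IP to the old PIX's MAC, and most stacks update that mapping immediately when the new device announces itself with a gratuitous ARP (a PIX generally does this itself when an interface comes up, so the swap usually costs seconds; the multi-hour delay in the Smoothwall story matches a provider-side ARP timeout instead). From a Linux box taking over an IP, the equivalent announcement would be:

        # announce that 203.0.113.5 (placeholder) now lives at eth0's MAC (iputils arping)
        arping -U -I eth0 -c 3 203.0.113.5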

    Read the article

  • Network card/driver stops under heavy load

    - by Uwe Keim
    For about two months now, I have had the following issue with my roughly one-year-old development machine (Windows 7, 64-bit): when doing network-intensive operations, e.g. executing a SQL script on a remote SQL Server to select or update thousands of records, the network card stops working. That is, suddenly no network connection is present anymore: no Internet, no local connection, simply nothing. The only resolution I have found so far is to disable my network card and then simply enable it again, as in the following screenshots:

    1.) Click "Deactivate"
    2.) Click "Activate"

    (German screenshots only, sorry.) Now, this is an acceptable workaround, but I would love to have the issue fixed, since it suddenly stops me from working when I'm connected remotely via VPN/RDP to my machine (Win7 64-bit). So my question is: can you imagine a possible cause for this issue, and give some hints on how to hunt it down and resolve it? I could imagine that this is a driver issue, a hardware issue, or even some kind of background software issue, like a software firewall or a virus scanner.
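
    If it turns out to be driver-related, a hedged sketch of a common Windows 7 mitigation is to turn off the TCP offload features that buggy NIC drivers mishandle under sustained load (every setting shown is reversible by setting it back to its previous value):

        netsh int tcp show global
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        rem checksum and large-send offload live in the NIC driver's Advanced tab, not in netsh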

    Read the article
