Search Results

Search found 4846 results on 194 pages for 'vertical sync'.

Page 145/194

  • Folder Redirection/Offline Files on Win 7 | Folders are empty when not connected to the domain

    - by Matt
    I've been struggling with this issue for days and cannot seem to find anyone else with a similar problem. I will note first that I have tried using both roaming profiles and the group policy setting to force local profiles... now onto the problem.

    What I am trying to do is have my teachers' accounts log onto their laptops using their domain credentials. Once logged in, their Desktop and Documents are redirected to a network share, //server/redirects/documents/. This works fine when the computer is connected to the domain network: Offline File sync works great and caches the files locally. However, this all breaks down when the user logs in while the computer is no longer connected to the domain network; the Desktop and Documents then come up empty. What I find very odd is that if I manually go to the Offline Files folder, all of the files are there; the group policy folder redirection just does not resolve to the offline folder. Is this by design? (It does not work like this on Vista: I have the exact same group policy settings on Vista machines and it works flawlessly.)

    Additional info: when I look at the event log, there are no folder redirection events at all when the user logs in while not connected to the network. In addition, a new profile is created in c:/users/username.domain.00x, and every log-in creates an additional profile. There is also an event stating that registry files were still in use. Any help would be appreciated.
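
    A rough diagnostic sketch for a laptop right after an offline logon; the event log name below is assumed to be the standard Folder Redirection operational channel, so verify it exists on your build before relying on it:

        rem Summarize applied GPOs, including the Folder Redirection client-side extension
        gpresult /h C:\temp\gp-report.html

        rem Recent Folder Redirection events, if the operational channel is present
        powershell -Command "Get-WinEvent -LogName 'Microsoft-Windows-Folder Redirection/Operational' -MaxEvents 20"

        rem Confirm the Offline Files service is actually running at the offline logon
        sc query CscService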

    Read the article

  • What web-based tool can allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below)

    We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users.

    The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts.

    Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin's clustered users and groups. However, with the currently installed Webmin it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords. (It's possible, but it's not easy; some of the functionality is in the Usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
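
    For reference, the underlying operation such a tool would have to automate is just appending a pasted key to the user's authorized_keys with the right ownership and permissions; a minimal sketch, assuming stock OpenSSH paths and a placeholder $USERNAME:

        # run as root on the accounts host; $USERNAME and pasted.pub are placeholders
        install -d -m 700 -o "$USERNAME" -g "$USERNAME" "/home/$USERNAME/.ssh"
        cat pasted.pub >> "/home/$USERNAME/.ssh/authorized_keys"
        chown "$USERNAME:" "/home/$USERNAME/.ssh/authorized_keys"
        chmod 600 "/home/$USERNAME/.ssh/authorized_keys"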

    Read the article

  • Thin client / cloud machine to run via iPad, iPhone, most Androids, etc.

    - by Carl Lindberg
    I'm tired of having a MacBook laptop that breaks down, or having to sync the files I need via Dropbox etc. between machines with different OS installations. It sucks. I want a thin client where I can log in from any machine (my iPhone, PC desktop, iPad, etc.) to one running machine. I would like to replace a modern, powerful desktop iMac with a thin client driven from my iPad. I will connect the iPad to a keyboard/mouse too, so you get the idea. But I also want to be able to use some Android phones (I guess most Android phones today have good enough performance/resolution to run a thin client). Of course it has to handle sound input and output. Printing can be solved by PDF/emailing etc., so no direct communication to printer ports or USB is necessary.

    Is there such a service today? It should cost somewhere under something like $40/month. I will run stuff like the CPU-heavy Ableton for music production, Xcode for making iOS apps, some games, etc., and on the thin client also run virtual machines: a VM of Ubuntu and one of Windows.

    Read the article

  • Primary zone will not transfer to secondary zone

    - by Matt Beckman
    Using DNS on Windows Server 2008, there is a constant struggle with adding primary and secondary zones. I will add a primary zone to NS1 for a new domain, edit it as needed, and when it's ready add the secondary zone to NS2. However, MOST of the time the secondary zone remains in an error state and never acquires the primary zone's data. I have gone back to domains a few weeks after adding them to find that Windows never propagated the change. Annoying.

    Anyway, I recently updated from SP1 to SP2 thinking this would help, but it hasn't. I added two new domains today and spent an hour because the secondary zone just would not sync. During that time, the only error in the logs was for one of the zones, where DNS complained about not being authoritative.

    To eventually resolve the issue, I ended up deleting the primary zone, creating a new primary zone, and hitting "Apply" after each and every field change. For example, after modifying the serial number from "1" to a date-appropriate "2010093001", I hit Apply, and then did the same after changing the Primary Server, Responsible Person, and finally Name Servers fields. After I did this, the secondary zone didn't waste any time getting the data. Ideas?
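
    A hedged troubleshooting sketch using the built-in tools rather than recreating the zone (zone and server names are placeholders): compare the SOA serial each server hands out, then ask the secondary to re-pull from its master.

        rem Compare the SOA serial on the primary and the secondary
        nslookup -type=SOA example.com ns1
        nslookup -type=SOA example.com ns2

        rem Force the secondary zone on NS2 to refresh from its master
        dnscmd NS2 /zonerefresh example.com

    If the refresh still fails, the Zone Transfers tab in the primary zone's properties (allowing transfers to the servers on the Name Servers tab, or to NS2 explicitly) is the usual thing to double-check.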

    Read the article

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB. I want to duplicate this directory.

    Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking checksums after copying important files. In this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with differing checksums, it turned out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had its date changed from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to change, exactly when it was changed, or why it happened for these particular files, but at least it seems to explain the checksum discrepancy.

    Method 2: As I want exact duplicates of the files, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test in addition to a thorough file-comparison test in FreeFileSync led to two similar but different results: 31 files fail the checksum test, 23 files fail the file-comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered, but also some AVIs, MPGs and a large 7-zip file. Closer inspection of one JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-zip file, I have not been able to pin down its discrepancy.

    Note: in both methods, every file has the correct file size after being copied.

    Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
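
    To rule the copy tools in or out, it can help to hash a known-bad file in both trees with something independent of either tool; a small sketch with built-in Windows commands (the file paths are placeholders):

        rem Hash the same file in the source and in the copy, then compare the digests
        certutil -hashfile "D:\Photos\IMG_0001.JPG" MD5
        certutil -hashfile "E:\PhotosCopy\IMG_0001.JPG" MD5

        rem Or byte-compare the two files directly
        fc /b "D:\Photos\IMG_0001.JPG" "E:\PhotosCopy\IMG_0001.JPG"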

    Read the article

  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. Last month my friend lost his database on Dreamhost, so he decided to move his WordPress-based blog (written in Chinese) to my server. He's using a wp-plugin called wp-db-backup to perform regular db backups. The two servers' environments are:

        Dreamhost:
          Linux 2.6.31.5-modsign-aufs2-grsec-2-opt
          mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0
          apache2, unknown version

        My server:
          Linux li159-46 2.6.32.12-x86_64-linode12
          mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1
          nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config and the db. I imported his db backup file as UTF-8 by default, then I synced the files from Dreamhost using rsync, and then I just changed the db address and nothing more. But when I took a first look at the "new" site, it was full of unreadable characters. I have met this problem before; I changed the charset options in the browser, but none of them made it display properly. Then I converted his db to GB18030, which works, but only if the browser charset is set to GB18030 or GBK, whereas by default browsers recognize the charset as UTF-8. I tried to edit the headers, but that doesn't work either. What can I do now? Thx~~
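
    One hedged way to rule out a charset mismatch during dump/import is to redo the transfer with the connection character set pinned at every step; a sketch with placeholder database, user and file names (DB_CHARSET in wp-config.php should then agree with what the tables actually use):

        # on the old host: dump with an explicit connection charset
        mysqldump --default-character-set=utf8 -u user -p blogdb > blogdb-utf8.sql

        # on the new host: create the database with a utf8 default, then import the same way
        mysql -u user -p -e "CREATE DATABASE blogdb CHARACTER SET utf8 COLLATE utf8_general_ci"
        mysql --default-character-set=utf8 -u user -p blogdb < blogdb-utf8.sql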

    Read the article

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all,

    First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. Users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming audio out over the network. This is all fine.

    The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day they aren't going to ssh into the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day. Since ext4 has journalling, journal checksumming, etc., this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case they come in the next day to find that their new file has been truncated or is missing entirely, and are unhappy.

    My question is: what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally.) Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here.) Should I set a particular write-ordering mode, activate barriers, etc.?
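
    A hedged sketch of the two knobs that seem most relevant here: Samba's per-share sync options, and a shorter ext4 journal commit interval. Treat these as settings to benchmark rather than a guaranteed fix; the share name, path and device below are placeholders.

        # smb.conf, on the upload share
        [audio]
            path = /mnt/ssd1/audio
            # honor client flush/sync requests instead of ignoring them
            strict sync = yes
            # fsync after every write, trading upload throughput for durability
            sync always = yes

        # /etc/fstab: keep write barriers on and commit the journal roughly every second
        /dev/sda1  /mnt/ssd1  ext4  defaults,barrier=1,commit=1  0  2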

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as a Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines onto the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files on the server share.

    The problem: when running Samba with the LDAP password backend, the client application runs VERY slowly, with a maximum transfer rate of around 2,500 kbit/s. If I disable LDAP, the client app's speed increases 20x, to a transfer rate of 50 Mbit/s, and it runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.

    The suspect: LDAP, and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    The slow smb.conf WITH LDAP:

        [global]
            workgroup = zmartsoft.local
            passdb backend = ldapsam:ldap://127.0.0.1
            printing = cups
            printcap name = cups
            printcap cache time = 750
            cups options = raw
            map to guest = Bad User
            logon path = \\%L\profiles\.msprofile
            logon home = \\%L\%U\.9xprofile
            logon drive = P:
            usershare allow guests = Yes
            add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
            domain logons = Yes
            domain master = Yes
            local master = Yes
            netbios name = server
            os level = 65
            preferred master = Yes
            security = user
            wins support = Yes
            idmap backend = ldap:ldap://127.0.0.1
            ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
            ldap group suffix = ou=Groups
            ldap idmap suffix = ou=Idmap
            ldap machine suffix = ou=Machines
            ldap passwd sync = Yes
            ldap ssl = Off
            ldap suffix = dc=zmartsoft,dc=local
            ldap user suffix = ou=Users
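
    One parameter that comes up repeatedly for slow ldapsam setups is ldapsam:trusted, which lets smbd take Unix account data straight from the LDAP entries instead of doing extra per-connection lookups; a hedged sketch, on the assumption that all users and groups are fully represented in LDAP (check your Samba version's smb.conf man page before relying on it):

        [global]
            # only safe when every Samba user and group exists completely in LDAP
            ldapsam:trusted = yes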

    Read the article

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now, and my history file has naturally become huge. I have Firefox Sync set up between my main desktop PC and my laptop.

    HW configs:
    PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD
    Laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD

    Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config, and the laptop is even slower; the experience is quite unresponsive. I figured that if I cleared up the history a little bit, I might avoid having to create a new profile to speed things up.

    The question itself, to illustrate: is there a way to delete all history entries that have been visited fewer than x (let's say 5) times and, at the same time, whose most recent visit is fewer than y (let's say 120) days old? As far as I know, the history file is some kind of SQL database, but I'm not really sure how the data is saved, whether there's a "safe way" to edit it, and what the query to do what I need would look like. Thanks in advance for any help.

    I kept browsing through previous SuperUser questions to see if I could find relevant information:

        "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs."

    As I'm unfamiliar with databases, I don't really understand how the two tables mentioned in the quote are related. [screenshot of a part of the tables] I've noticed the visit_count is in a standard format, making it easy to work with. The last_visit_date looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
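
    For what it's worth, a hedged sketch of the kind of query involved, assuming the usual places.sqlite schema in which moz_places carries visit_count and last_visit_date (microseconds since the Unix epoch) and moz_historyvisits rows point back at moz_places via place_id. Back up the profile and run it only while Firefox is closed; other tables also reference place ids, so this is a rough pruning, not a clean vacuum. Flip the last_visit_date comparison if you meant the opposite cutoff.

        sqlite3 places.sqlite "
        DELETE FROM moz_historyvisits
         WHERE place_id IN (SELECT id FROM moz_places
                             WHERE visit_count < 5
                               AND last_visit_date < (strftime('%s','now') - 120*86400) * 1000000);
        DELETE FROM moz_places
         WHERE visit_count < 5
           AND last_visit_date < (strftime('%s','now') - 120*86400) * 1000000;"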

    Read the article

  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 server to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty: the secrets.tdb file is only 20K, whereas the "pdbedit -L | wc -l" command gives me 16970 lines. On the Samba 3 server, /var/lib/samba is 1.5M.

    After I migrated the database (following the instructions at http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), the "pdbedit -L" command on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody.

    So I tried creating a VM with Samba 3, added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like the users on the real Samba 3 server are in a sort of cache. I have already migrated the /etc/{passwd,shadow,group} files and I can see the users with the "getent passwd" command.

    Any ideas why my users are present when I use pdbedit but the database is so empty?

    The [global] part of my smb.conf on the Samba 3 server:

        [global]
            workgroup = INTERNET
            netbios name = PDC-SMB3
            server string = %h server
            interfaces = eth0
            obey pam restrictions = Yes
            passdb backend = smbpasswd
            passwd program = /usr/bin/passwd %u
            passwd chat = *new* %n\n *Re* %n\n *pa*
            username map = /etc/samba/smbusers
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%U
            max log size = 1000
            socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
            add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users
            delete user script = /usr/sbin/userdel -r '%u'
            add group script = /usr/sbin/groupadd '%g'
            delete group script = /usr/sbin/groupdel '%g'
            add user to group script = /usr/sbin/usermod -G '%g' '%u'
            add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines
            logon script = logon.cmd
            logon home = \\$L\%U
            domain logons = Yes
            os level = 255
            preferred master = Yes
            local master = Yes
            domain master = Yes
            dns proxy = No
            ldap ssl = no
            panic action = /usr/share/samba/panic-action %d
            invalid users = root
            admin users = admin, root, administrateur
            log level = 2
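
    Worth noting: the old server's smb.conf says passdb backend = smbpasswd, so the accounts live in the flat smbpasswd file rather than in a tdb, which would explain a tiny secrets.tdb next to thousands of pdbedit entries. A hedged sketch for pointing pdbedit at that backend explicitly and exporting it into a tdbsam file for the migration (the paths are typical defaults, adjust for your layout):

        # list the users straight from the explicit backend, to confirm where they live
        pdbedit -L -b smbpasswd:/etc/samba/smbpasswd | wc -l

        # export the smbpasswd backend into a tdbsam file usable by the migration
        pdbedit -i smbpasswd:/etc/samba/smbpasswd -e tdbsam:/tmp/passdb.tdb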

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with; we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well: it's quick and almost 100% complete. However, it's acting pretty strange with a few files (note: the company name has been changed in the paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL
        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company Names Data\, but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace that these files were even attempted to be copied, as no errors are output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.
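
    To see what robocopy decides about those specific files, a one-off run against just that directory with the file and directory listings re-enabled and verbose output can help; a hedged sketch (the long source path is shortened to a placeholder), after which the log should classify each file as New File, EXTRA, skipped, and so on:

        ROBOCOPY "M:\Company Names Data\...\CLIENT SALES ORDER REGISTER\Someclient" "E:\dest\Someclient" /COPY:DAT /V /TS /FP /R:0 /W:0 /LOG:E:\robocopy-debug.log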

    Read the article

  • How can I fix video tearing and pausing on Windows XP Flash videos?

    - by xvs
    I have what should be a reasonably fast PC: a quad-core Intel 6600 at 2.4 GHz, 4 GB of RAM, an ATI 3800-series video card and an LG L246WP monitor, which I selected specifically because it was supposed to work well with video and show no trails or other artifacts. So I should be able to play video with no problems. And I can, as long as that video isn't Flash video.

    With Flash, what I see is tearing, especially during pans, and pausing: every few seconds the video pauses for about 300 ms while the sound stays continuous. I tried going into the video card setup and changing vertical sync, pulldown detection, Windows Media video acceleration, deinterlacing and triple buffering, but no combination of settings I've tried has changed or corrected the problem in any way. I've also tried enabling and disabling hardware acceleration in the Flash settings, to no avail. This problem happens whether the video is still streaming or has fully streamed in before playing.

    So, what can I do? Is this just a Flash issue, or is there a way to get it to work?

    Read the article

  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application is formed by (currently) two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet; DNS round robin distributes traffic over both. In the future the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy.

    My question is: how do I keep the state of both reverse balancers/proxies in sync? For example, for maintenance purposes I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the Balancer-Manager web form on each proxy and changing the distribution rules, but I have to do that on each proxy manually and make sure I enter the same thing. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances and updates all of them with the same information?

    Update: the tool should retrieve the runtime status and make runtime changes, just like the existing Balancer-Manager, only for a number of proxies rather than just one. Modifying configuration files is not what I'm interested in (there are plenty of tools for that).
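
    I don't know of a stock tool that links Balancer-Managers, but since the manager is just an HTTP form, one pragmatic sketch is to script the same requests against every proxy with curl. The read side is straightforward; for changes, the form field names vary between Apache releases and include a per-page nonce, so treat the FIELDS value below as a placeholder to capture once from your own browser's request:

        #!/bin/sh
        # read the current balancer state from each frontend
        for proxy in proxy1.example.com proxy2.example.com; do
            curl -s "http://$proxy/balancer-manager" -o "state-$proxy.html"
        done

        # replay one captured Balancer-Manager form submission against every frontend
        FIELDS='b=mycluster&w=http://backend1:8080&...'   # placeholder: copy from the browser
        for proxy in proxy1.example.com proxy2.example.com; do
            curl -s "http://$proxy/balancer-manager" --data "$FIELDS" > /dev/null
        done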

    Read the article

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:

        rm -rf: deletes expired snapshots
        mv: moves older snapshots down a slot
        cp -al: duplicates the last snapshot to a new slot
        rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot

    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:

        [25/Dec/2010:14:00:02] rsnapshot hourly: started
        [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
        [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
        [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
        [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
        [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
        [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
        [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions: I'm currently using ext4 for the filesystem. Maybe this is not the best choice among those available in Red Hat; does anyone have any recommendations that would speed up the process? The partition's mount options are sync,dirsync 1 2. Is there a way to optimize this, since the drive is used solely for rsnapshot? Of course, reasoning would be greatly appreciated.
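
    For comparison, a hedged fstab sketch for a backup-only ext4 volume: the sync,dirsync options make every metadata operation synchronous, and rsnapshot's rm -rf and cp -al phases are almost entirely metadata work, so dropping them and adding noatime is the first thing worth benchmarking (the UUID is a placeholder):

        # /etc/fstab entry for the dedicated rsnapshot drive
        UUID=xxxx-xxxx  /mnt/extdrive  ext4  defaults,noatime,nodiratime  0  2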

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cronjob and svnsync (it needs to go from inside to outside of a firewall, so a post-commit hook is not possible). We installed a pre-revprop-change hook just as the documentation describes. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to manually recopy via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says "Copied properties for revision 19885." But when I query them, it's just the same. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
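
    To see exactly which revprops the mirror stores for a given revision, querying them directly is a bit more precise than svn info; a short sketch using the revision from the log above:

        # on the mirror host
        svn proplist --revprop -r 19817 file://<path-to-mirror>
        svn propget --revprop -r 19817 svn:author file://<path-to-mirror>
        svn propget --revprop -r 19817 svn:date file://<path-to-mirror>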

    Read the article

  • How can I download a copy of an S3 public data set?

    - by tripleee
    I was naively assuming I could do something like

        s3cmd sync s3://snap-d203feb5 /var/tmp/copy

    but I seem to have the wrong idea of how to go about this. I cannot even get a simple thing to work:

        vnix$ s3cmd ls s3://snap-d203feb5
        Bucket 'snap-d203feb5':
        ERROR: Bucket 'snap-d203feb5' does not exist

    I guess the identifier I have is not for a "bucket" but for a "public data set". How do I get from one to the other? Do I have to start up an EC2 instance and create a bucket for this? How? The instructions at http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-public-data-sets.html seem to assume I want to use the data in an EC2 instance, but in this case I'd just like to browse a bit, at least for a start.

    By the by, copy/pasting the "US Snapshot ID" causes a nasty traceback from Python; they publish the ID with a weird Unicode (I presume) dash which cannot be copy/pasted directly. Am I making a mistake when I copy it? And what's the significance of "US" in there? Can't I use the data outside North America?

    Read the article

  • How can I make the XAnalogTV xscreensaver fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen.

    If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased).

    For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true.

    Is there any way to force XAnalogTV to fill my entire screen? Alternatively, as I believe the problem is aspect-ratio related, is there any way I can get the screensaver to render larger than my display, and simply discard the extra pixels?

    Relevant screenshots (windowed, maximized, and full-screen, respectively): you can see in the last two that the scrollbar from Firefox is clearly visible, even though this is a full-screen screensaver.

    Read the article

  • Restore passwd for root on a server

    - by s.mihai
    Hello,

    I have a DVR server with embedded Linux. It has some telnet functionality, but I don't have the password for it (the Chinese manufacturer refuses to give me the password). I did get an upgrade folder from them and found a passwd file inside, so I assume that when I upgrade the firmware the password in that file will be used.

    Now I am trying to modify the file so that I can insert a password I already know. The problem is that I don't know how to create the password hash. From what I can tell, the password hash is $1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961.

    The file is named rom.ko, and I found the command

        telnetd /mnt/yaffs/web/boa -c /mnt/yaffs/web & /bin/cp -f /mnt/yaffs/rom.ko /etc/shadow

    in a script file, so I assume this is the right way. Can you help me reconstruct a password that I already know? Tell me how, or make one for me :) ?

    The passwd file:

        root:$1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961:0:0:99999:7:-1:-1:33637592
        bin::10897:0:99999:7:::
        daemon::10897:0:99999:7:::
        adm::10897:0:99999:7:::
        lp::10897:0:99999:7:::
        sync::10897:0:99999:7:::
        shutdown::10897:0:99999:7:::
        halt::10897:0:99999:7:::
        mail::10897:0:99999:7:::
        news::10897:0:99999:7:::
        uucp::10897:0:99999:7:::
        operator::10897:0:99999:7:::
        games::10897:0:99999:7:::
        gopher::10897:0:99999:7:::
        ftp::10897:0:99999:7:::
        nobody::10897:0:99999:7:::
        next::11702:0:99999:7:::
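
    A hedged sketch of generating a compatible MD5-crypt ($1$...) hash for a password you already know, using openssl's passwd mode; reuse the existing salt to test a guess against the current hash, or pick any new salt for a replacement entry (the password below is an example):

        # generate an MD5-crypt hash with a chosen salt
        openssl passwd -1 -salt 1/lfbDKX mynewpassword

        # paste the resulting $1$1/lfbDKX$... string into the second field of the root line:
        # root:<hash>:0:0:99999:7:-1:-1:33637592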

    Read the article

  • Debootstrap Ubuntu over NFS leads to mknod I/O error

    - by Aaron B. Russell
    Hi everyone,

    I'm trying to prepare an Ubuntu environment for a diskless machine that will PXE boot and mount an NFS share as its root. I've currently got another Ubuntu machine mounting the NFS share, and I'm trying to debootstrap into it, but it has trouble creating devices over NFS:

        root@kimiko:~# mount | grep Seiuchi
        192.168.0.203:/mnt/user/Seiuchi on /mnt type nfs (rw,addr=192.168.0.203)
        root@kimiko:~# debootstrap --arch i386 maverick /mnt http://gb.archive.ubuntu.com/ubuntu/
        mknod: `/mnt/test-dev-null': Input/output error
        E: Cannot install into target '/mnt' mounted with noexec or nodev

    My NFS rule on the unRAID server is 192.168.0.201/32(rw,no_root_squash,sync). I don't have the noexec or nodev options set. I've not got much experience with NFS, so I'm probably missing something basic in the way I'm sharing this, but my attempts at googling for an answer aren't really turning anything useful up. Does anyone have suggestions on what I might have missed, or maybe relevant docs?

    Edit: creating normal files (and directories) works just fine; I just can't create devices...

        root@kimiko:/mnt# mkdir foo
        root@kimiko:/mnt# cd foo
        root@kimiko:/mnt/foo# touch bar
        root@kimiko:/mnt/foo# mknod quux c 4 64
        mknod: `quux': Input/output error
        root@kimiko:/mnt/foo# ls
        bar

    Read the article

  • Gentoo zlib removed = Portage corrupted

    - by Shamanu4
    Hello,

    I haven't been using Gentoo for very long, and I've made a mistake: I removed the zlib package from the system. Now Portage is broken:

        # emerge --sync
        Traceback (most recent call last):
          File "/usr/bin/emerge", line 36, in <module>
            from _emerge.main import emerge_main
          File "/usr/lib64/portage/pym/_emerge/main.py", line 41, in <module>
            from _emerge.actions import action_config, action_sync, action_metadata, \
          File "/usr/lib64/portage/pym/_emerge/actions.py", line 44, in <module>
            from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
          File "/usr/lib64/portage/pym/_emerge/depgraph.py", line 40, in <module>
            from _emerge.FakeVartree import FakeVartree
          File "/usr/lib64/portage/pym/_emerge/FakeVartree.py", line 11, in <module>
            from portage.dbapi.vartree import vartree
          File "/usr/lib64/portage/pym/portage/dbapi/vartree.py", line 56, in <module>
            import re, shutil, stat, errno, copy, subprocess
          File "/usr/lib64/python2.6/subprocess.py", line 430, in <module>
            import pickle
          File "/usr/lib64/python2.6/pickle.py", line 1258, in <module>
            import binascii as _binascii
        ImportError: libz.so.1: cannot open shared object file: No such file or directory

    How can I reinstall the zlib package and repair the system?
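
    A hedged recovery sketch: Python (and therefore emerge) only needs libz.so.1 back on the library path, so copying a matching 64-bit library from another machine or from an extracted stage3/binary package is usually enough to make Portage importable again, after which zlib can be rebuilt cleanly. The paths and version below are placeholders for a 64-bit profile:

        # borrow a matching libz from another x86_64 box (version number is a placeholder)
        scp otherbox:/lib64/libz.so.1.2.3 /lib64/
        ln -sf libz.so.1.2.3 /lib64/libz.so.1
        ldconfig

        # with emerge working again, reinstall zlib properly
        emerge --oneshot sys-libs/zlib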

    Read the article

  • Problem with Windows Service and network printers.

    - by Mohammadreza
    I have a Windows Service application that every now and then needs to print some documents. As far as I know, to print those documents my service must run under a user account other than Local Service or Network Service, so I created a user account, added it to the Administrators group and ran the service with it. With locally installed printers I don't have any problems, because those printers are automatically installed for all accounts.

    To be able to print to network printers, I created another application that syncs the installed printers of the currently logged-in user to the user account that my service uses, with the rundll32.exe printui.dll,PrintUIEntry command. On Vista and Windows 7 I don't have any problems with this syncing, because every time a printer needs to be installed an authentication window opens and asks for the appropriate user account to install that printer (the service's user account does not exist on the network printers' computers). But on XP a Find dialog with the caption "Connecting to {printername}" appears and stops responding; or sometimes it installs the printer, but every time the service tries to print, a Win32Exception with the message "A StartDocPrinter call was not issued" is thrown, and in the user account that runs the sync application a duplicate printer shows up that I can't delete except by force (via the registry).

    Am I going about printing documents from a Windows Service the right way at all? If yes, how can I solve the above-mentioned problem? And if not, what the heck should I do? Thanks.
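
    One approach that is often suggested for services is installing the network printer as a per-machine connection rather than per-user, so it exists for every account, including the one the service runs under; a hedged sketch using the same printui.dll entry point (server and queue names are placeholders):

        rem add a per-machine (all users) connection to a shared printer
        rundll32 printui.dll,PrintUIEntry /ga /n "\\printserver\AccountingLaser"

        rem the connection appears for all accounts after the Print Spooler restarts
        net stop spooler && net start spooler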

    Read the article

  • Alternative software for Pinnacle PCTV 100e

    - by Stijn Sanders
    I have a Pinnacle PCTV 100e external USB cable television receiver. I've been using Pinnacle's software that came with the card (TVCenter Pro) to record things at given times. Things I don't like: an extremely high CPU load, and that it doesn't seem to stop the screensaver from running when watching in full screen. Also, I was away for the last two weeks and the schedules went terribly bust: some items were recorded hours before or after the actual scheduled time (so I missed some shows), and some recurring schedules weren't converted into the next occurrence correctly!

    Is there good alternative software that would work with my PCTV 100e? (Preferably cheap or free.)

    I've tried VLC Player, which gets video but no audio. I've tried MediaPortal, which crashes when trying to scan for channels; when I select a channel manually, the stored mpg has big encoding errors and is also missing audio. There's VirtualDub, but that doesn't have ready-made scheduled-recording options. I can conjure up some scheduled scripts for it, but I've noticed the sync gets awfully wrong after some time. I've tried Windows Media Center, but it doesn't seem to support the PCTV 100e.

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes things as easy as possible: I write and test all code on my local machine, then sync it to my web server. I set up several virtual hosts so that I can access my projects by typing "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get a 403 Forbidden error: "You don't have permission to access / on this server."

    To set up the virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
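
    For what it's worth, a hedged sketch of the shape the fix usually takes on WAMP: requests arriving under the DynDNS name match neither ServerName, so they fall through to the first vhost (ladybug), and WAMP's default directory policy then denies non-local clients, hence the 403. Adding the DynDNS hostname as a ServerAlias and explicitly allowing access to that DocumentRoot (Apache 2.2 syntax; the hostname is taken from the question) is the usual starting point:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ServerAlias url.dyndns.info
            <Directory "c:/wamp/www">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>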

    Read the article

  • Switching DNS server providers

    - by Yoav Aner
    I'm trying to wrap my head around something that I thought I kind of understood, but clearly there's a piece missing.

    We're currently using Zerigo as our primary DNS, with slave DNS running on Linode. This works quite well. However, recent DDoS attacks on Zerigo meant that while DNS queries were still resolved, we were unable to make any DNS changes. Since we rely on DNS changes for our own infrastructure, I'm looking to improve this somehow. I'd rather not ditch Zerigo completely, and I realise that this or a similar problem can happen with ANY primary DNS hosting provider. It might not be DDoS; it could be a bug on their servers, or something else that means we can no longer issue updates. For this I want a fallback option: a completely independent (primary) DNS provider (maybe AWS), which we will keep in sync manually and switch over to when there's a problem.

    This brings me to my question: how do I make sure we can switch providers quickly enough? Specifically, our registrar has a list of name servers, but no settings like TTL etc. How do DNS clients know to use the newly updated name server records? Is this configured in the SOA? The SOA itself is hosted with the DNS provider, however, and we might not be able to update it...

    This is not a question about a one-time move, which could be planned, scheduled and tested, but rather about being able to switch when things are half-broken.
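
    On the mechanics: resolvers find the authoritative servers from the NS records the registrar publishes in the parent (TLD) zone, and those delegation records are cached according to the parent's TTL rather than anything in your SOA, so the switch-over speed is largely outside your own zone's control. A hedged way to see what the parent is actually publishing (example.com and the provider name server are placeholders):

        # delegation as published by the .com servers (the TTL shown is the parent's)
        dig NS example.com @a.gtld-servers.net +norecurse

        # what the currently authoritative server answers, including the SOA serial
        dig SOA example.com @ns1.current-provider.example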

    Read the article

  • How can I mount dd image of a partition?

    - by Puneet Arora
    I created a dd image of a partition (containing an HFS+ filesystem) on one of my disks (not the entire disk) a few days ago, using the following command:

        dd conv=sync,noerror bs=8k if=/dev/sdc2 of=/path/to/img

    How can I mount it? I tried the following, but it doesn't work:

        mount -o loop,ro -t hfsplus /path/to/img /path/to/mntDir

    It gives me:

        mount: wrong fs type, bad option, bad superblock on /dev/loop1,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    and dmesg | tail gives me:

        [5248455.568479] hfs: invalid secondary volume header
        [5248455.568494] hfs: unable to find HFS+ superblock
        [5248462.674836] hfs: invalid secondary volume header
        [5248462.674843] hfs: unable to find HFS+ superblock
        [5248550.672105] hfs: invalid secondary volume header
        [5248550.672115] hfs: unable to find HFS+ superblock
        [5248993.612026] hfs: unable to find HFS+ superblock
        [5248998.103385] hfs: unable to find HFS+ superblock
        [5249031.441359] hfs: unable to find HFS+ superblock
        [5249036.274864] hfs: unable to find HFS+ superblock

    Is there something wrong with what I am doing? I searched for how to do this, but all the results I found only talk about mounting a partition from within a full-disk image, using the offset option with mount; none cover the case where the image itself is of a single partition. Thanks.

    PS: I'm running 64-bit Arch Linux, and the partition on the original disk, /dev/sdc2, mounts fine.

    Read the article
