Search Results

Search found 31989 results on 1280 pages for 'get method'.


  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Currently the folder /dev is created by udev, but since no hot-plugging functionality is needed and boot time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl / sysfs), that does not work for me in this case. So I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible, and what is the best way to do it? My approach would be:

    1. delete /dev from the RFS
    2. create a tar containing the basic devices
    3. mount a tmpfs on /dev
    4. untar the tar file into /dev
    5. change parameters

    Could this work? Do you see any problems? I found out that you can mount on top of an already mounted mount point; is it somehow possible to take the existing data with you while mounting the new file system? If so, that would be very convenient! Thanks

    Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed it into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after INIT:

        # mount /dev on tmpfs
        echo -n "Mounting /dev on tmpfs..."
        mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
        mknod -m 600 /dev/console c 5 1
        mknod -m 600 /dev/null c 1 3
        echo "done."
        echo -n "Populating /dev..."
        tar -xf /usr/devices.tar -C /dev
        echo "done."

    This works fine on the version booted over NFS: if I place printf's in the code I can see it executing, and if I comment out the extracting part it complains about missing devices.

    Booting OK:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
        INIT: Entering runlevel: 2
        UBI: attaching mtd8 to ubi0

    With the extraction of the tar commented out:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        Cannot access the Hardware Clock via any known method.
        Use the --debug option to see the details of our search for an access method.
        Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
        INIT: Entering runlevel: 2
        libubi: error!: cannot open "/dev/ubi_ctrl"

    So far so good. But if I pack the whole story into a squashfs and boot from there, it acts strangely. While booting it tells me that it is unable to open an initial console, and it throws errors when mounting the UBIFS devices, but it finally provides a login anyway. On top of that, my echo's are not executed. If I then log in, /dev is mounted as tmpfs as desired and all the devices reside inside. When I re-run the "mount" command to mount the UBIFS partitions, it executes without problems and the partitions are usable.

    From squashfs:

        VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
        Freeing init memory: 136K
        Warning: unable to open an initial console.
        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19

    Additionally, a part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? Other question: is 5 MB enough/too much for /dev?
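    On the side question of taking the existing data with you when mounting over /dev: a minimal sketch of one common alternative to carrying a tar archive around, assuming a util-linux mount that supports --move and that /dev is not a shared mount (untested on this exact setup):

        # Seed the new tmpfs from the current /dev, then move it into place.
        mkdir -p /mnt/newdev
        mount -t tmpfs -o size=5M,mode=0755 tmpfs /mnt/newdev
        cp -a /dev/. /mnt/newdev/        # -a preserves device nodes, owners and permissions
        mount --move /mnt/newdev /dev    # put the populated tmpfs on top of /dev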

    Read the article

  • Automating and deploying new linux servers

    - by luckytaxi
    I'm in the process of developing a method to automate bringing new virtual machines into my environment. 90% of our machines are virtual, but the process is similar for both physical and VMware-based images. What I do now is use cobbler to install the base OS. The kickstart script has post hooks to modify the yum repo and to install puppet and func. Once the servers are running, I manually add them into nagios and sign the certificate via the puppetmaster. I've since migrated most of the resources to use mysql as the backend. I wanted to see what others are doing. My goal for 2011 is to have puppet inventory the hardware into mysql, and then write a Python script so that nagios grabs the info and automatically adds it for monitoring purposes. It's kind of tedious to have to add each new server into nagios, puppet's dashboard, munin, etc...
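    A rough sketch of the kind of glue script this describes, assuming the legacy Rails storeconfigs MySQL schema (a hosts table with name and ip columns), a database called puppet, a nagios MySQL user and the usual conf.d layout; all of those are assumptions to adjust to the real setup:

        #!/bin/sh
        # Sketch: generate Nagios host definitions from puppet's storeconfigs DB.
        # Table/column names, DB credentials and paths below are assumptions.
        OUT=/etc/nagios/conf.d/auto-hosts.cfg
        mysql -N -u nagios -p"$MYSQL_PASS" puppet -e 'SELECT name, ip FROM hosts' |
        while read name ip; do
            printf 'define host {\n  use generic-host\n  host_name %s\n  address %s\n}\n' "$name" "$ip"
        done > "$OUT"
        service nagios reload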

    Read the article

  • Recycle Bin for Windows Server 2003 File Shares

    - by Joseph Sturtevant
    One of the networks I administer uses Windows Server 2003 file shares to provide network storage for users. To protect against accidental deletion, I use Shadow Copies to create snapshots twice a day. This method is only effective, however, for files which were on the share during the last snapshot. When users accidentally delete files recently placed on the share, I have no recourse except to remote desktop into the server and attempt retrieval with an undelete utility (which is only effective if the file has not been overwritten). Is there a feature like the Windows Recycle Bin for Windows Server 2003 file shares? What is the best way to protect my users against accidental file deletion in this scenario?

    Read the article

  • Skype companywide global contacts list

    - by Martin
    We are a medium-sized company based across several sites and with a number of home workers. We have more or less settled on Skype as our de facto method of communication. At the moment the only pain is ensuring that everybody has all the other employees added to their contact list. It can be a real pain when a new employee starts: they have to send their details to everyone else and vice versa. Is there a solution that allows us to manage a central contacts list that we can push out to new/existing users?

    Read the article

  • Traceroutes to every site include nameintelligence.com

    - by Cyclone
    I used domaintools.com to do a traceroute on a bunch of sites, and noticed that every single one leads to this "nameintelligence.com" site that I have never heard of. Absolutely every site, including this one, google, my own site, yahoo, microsoft.com, stackoverflow, EVERYTHING, has nameintelligence.com in the first position. What is that site, and what do they do? They're apparently a PR 4, yet I have never heard of them. I think this is the right place to ask; I'm sorry if I'm wrong. Here is the traceroute for google: http://dns-tools.domaintools.com/ip-tools/?method=traceroute&query=74.125.53.99
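    For comparison, a quick sketch of running the same trace locally (the IP is the one from the query above; mtr is optional and only one of the two commands is needed): the first hops then belong to your own network rather than to the web tool's, which makes it easier to spot which hops are the tool's own infrastructure.

        traceroute -n 74.125.53.99      # numeric output, no reverse DNS lookups
        mtr --report www.google.com     # if mtr is installed: combined ping/traceroute summary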

    Read the article

  • Exchange 2003 - Keep user's mailbox but disable account and prevent new emails

    - by molecule
    Hi all, just wanted to know your take on this... A user has left the company but may return in the future. I would like to disable his AD account, archive all his emails, keep his mailbox and prevent new emails from being sent to him. What's the "best practice" method of doing this? Please enlighten, and thanks in advance. What I would do:

    - Reset the AD password
    - Change the SMTP address, so new emails sent to his/her previous address generate NDRs
    - Log on as him/her and archive the emails
    - Disable the AD account
    - Hide the address from the GAL

    Read the article

  • Setting up dante socks server

    - by skerit
    I want to tunnel all my internet traffic through my VPS, so I'm trying to install a proxy server. However, I can't seem to browse the internet through Dante; I get the ERR_EMPTY_RESPONSE error. This is my config:

        logoutput: stderr /home/user/dantelog
        internal: eth1 port=1080
        external: eth1
        method: username pam
        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody
        client pass {
            from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0
        }

    Do I really have to run two proxy servers, one for HTTP and one for SOCKS? Or is there something else I can do?
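    One quick way to check whether the SOCKS side works at all, independent of the browser, is to point curl at it; a sketch, where the host, port and credentials are placeholders to replace with your own:

        # Test the SOCKS5 proxy directly from another machine.
        # --socks5-hostname also sends DNS lookups through the proxy.
        curl -v --socks5-hostname your-vps-ip:1080 --proxy-user someuser:somepass http://www.example.com/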

    Read the article

  • Preventing a confirmation pop-up when updating fields in Word

    - by Gilles
    In Word 2007, an obvious candidate for updating all the fields in a range is myrange.Fields.Update. But if the range is the element of ActiveDocument.StoryRanges corresponding to the footnotes, endnotes or comments, this triggers a confirmation pop-up: "Word cannot undo this action. Do you want to continue?" What is this pop-up warning me about? How do I get rid of it (if it's not important)? An obvious workaround is to iterate over the fields and call each field's Update method; that doesn't trigger the question. But if I do this, what do I miss? (This follows up on "How do I update all fields in a Word document".)

    Read the article

  • Save Website To Disk

    - by Christian
    Hello everyone! I have a very poor internet connection when I'm living at home. The only time I have good internet is at college. When I get home, the most mundane task, like opening a web page, becomes a five-minute stress test. So what I was thinking was to download the site ahead of time, for example superdickery. I was wondering what the best method would be to download the entire image archive of the site? Would it be illegal if I did this? It's just that I don't want to be frustrated every time I just want to load a simple JPEG image.
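    One common approach for this sort of offline archiving is wget's recursive mode; a minimal sketch, where the URL is a placeholder and the depth and file-type filters will need tuning to the actual site layout:

        # Recursively fetch a site's images for offline viewing (placeholder URL).
        # -r recursive, -l 3 depth limit, -np don't ascend to parent directories,
        # -A keep only these extensions, -w 1 wait a second between requests,
        # -P target directory for the download.
        wget -r -l 3 -np -A jpg,jpeg,png,gif -w 1 -P ~/offline-archive http://www.example.com/archive/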

    Read the article

  • proxy.pac file performance optimization

    - by Tuinslak
    I reroute certain websites through a proxy with a proxy.pac file. It basically looks like this:

        if (shExpMatch(host, "www.youtube.com")) { return "PROXY proxy.domain.tld:8080; DIRECT"; }
        if (shExpMatch(host, "youtube.com"))     { return "PROXY proxy.domain.tld:8080; DIRECT"; }

    At the moment about 125 sites are rerouted using this method. However, I plan on adding quite a few more domains, and I'm guessing it will eventually be a list of 500-1000 domains. It is important not to reroute all traffic through the proxy. What's the best way to keep this file optimized, performance-wise? Thanks

    Read the article

  • Dovecot ignoring maximum number of IMAP connections

    - by Michelle
    I have a single-mailbox mail server running Dovecot/Postfix, and I have two IMAP clients: Thunderbird on the PC and K9 on Android. I keep receiving this error in my logs even after changing the mail_max_userip_connections variable to 50:

        puppet dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10): user=<[email protected]>, method=PLAIN, rip=62.242.90.2, lip=198.29.31.229, TLS

    Why does it say that it is set to 10 in the log? Is that hardcoded?

        grep -r "mail_max_userip_connections" /etc/dovecot
        /etc/dovecot/conf.d/20-managesieve.conf: #mail_max_userip_connections = 10
        /etc/dovecot/conf.d/20-pop3.conf: #mail_max_userip_connections = 3
        /etc/dovecot/conf.d/20-imap.conf: mail_max_userip_connections = 50

    I've restarted dovecot after making the changes, but this error is still logged and I can't access the mailbox. Can anyone help me understand why I can't seem to raise the maximum limit?
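    With Dovecot 2.x (which the conf.d layout suggests), one way to see the value the running configuration actually resolves to, rather than what any single file says, is doveconf; a quick sketch:

        # Show the effective value after all configuration files are merged
        doveconf mail_max_userip_connections
        # Dump only the non-default settings, to spot a later file or a
        # protocol/service block that overrides the one you edited
        doveconf -n | grep -i -B2 mail_max_userip_connections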

    Read the article

  • Dropbox takes hours(?) to sync & shows different modified times (coincidentally in the future)

    - by user10580
    Dropbox is taking hours to sync; I can't tell exactly how long because the timestamps on the website make no sense: they say the files were modified... tomorrow. Actually my netbook (Windows XP) says they were last modified tomorrow in Windows Explorer as well. It's bizarre. The time and date on both computers are correct. The files in question are in a symlinked directory on the laptop (which are synced fine, with the correct timestamps). I have looked for an option to force Dropbox to sync, but haven't located one. (There might be a command-line method, but I haven't had the time to explore.) Thanks

    Read the article

  • Bandwidth preserving browsing mode

    - by Elazar Leibovich
    I'm looking at some methods to browse the web in situations where bandwidth is scarce (such as a flaky wifi connection, or a mobile phone internet provider who overcharges for bandwidth). One thing that would save a lot of bandwidth is not downloading images while browsing. This approach has two main drawbacks:

    1. Sometimes a site's layout depends on images.
    2. There are some images you wish to see (so disabling image downloading through Firefox's settings is not quite convenient).

    I'm therefore looking for a method that would allow me to:

    1. Use some heuristic to find out which images are related to the website layout and allow them to be downloaded.
    2. Select a particular image from a website, download it and display it.

    Maybe there's a Firefox extension for that?

    Read the article

  • Seeing a pre-logon app's GUI after logon (or ever)

    - by JimB
    I'm looking for either a method to achieve this or a clear reason why it's not possible. I use Scheduled Tasks to start an app with a GUI at system startup. I want to see that GUI's screen after logon without restarting the app. I'm willing to type a password and/or re-logon and/or use whatever app or tool helps, including changing the way I run the GUI app. It just can't wait for a user logon to start. How do I do it? Or if it's absolutely impossible, why? I've read about "Shatter attacks" but that doesn't seem to cover this. I'm most interested in XP and Windows 7. If multiple solutions exist, of course I'd prefer the most convenient, flexible and/or open source.

    Read the article

  • Multi serial devices on one port

    - by adopilot
    I am developing software for managing a parking lot. The system is designed to use four HID RFID devices for authorizing clients at the gates. Each device is connected to the server by an RS232 serial port. Now I am wondering whether there is some device or method to join all of those devices into one, so that I have only one RS232 cable connected to the server. My problem is that the RFID card readers are simple (stupid) devices on which I am not able to change anything, but I would like to know which specific device the input is coming from. What I would want this new joining device to do is add a suffix or prefix to the string sent by each RFID card reader, so I can identify the device that is sending the card number to my server. I am developing my system on Windows platforms.

    Read the article

  • Remove LCD Stand for Wall Mounting - FSM-270YG

    - by Benjamin Chambers
    Based on Jeff Atwood's post on Coding Horror, I ordered one of these monitors, and I've been absolutely loving it. However, I recently (i.e. today) took the next step in monitor-y goodness and fastened the sucker to an articulated wall mount. Unfortunately, I can't figure out how to remove the stand. The flat portion comes off with a single screw, but the leg it fastens to has no apparent method of removing it. Has anyone figured out a trick for removing these, so they don't just stick out below the screen? Should I remove the screws from the backside of the screen, and look for an internal connection to remove? Or just give up and live with it? (After all, it's a great display, it's floating in the air in front of me, and the stand leg is only a minor annoyance).

    Read the article

  • How do I expose a webapp on :8090, even though firewall allows only :80 and :22

    - by Kaustubh P
    I am a noob at server-related stuff, so bear with me. I use Amazon Web Services (EC2), on which I have a webapp running on Jetty, which listens on port 8090. I deploy the webapp through the usual method of java -jar start.jar. To access the app I then have to add the port to the URL, like this: someIP:8090/app. But just typing someIP in the browser takes me to a page that says "It works! This is the default web page for this server. The web server software is running but no content has been added, yet.", which I assume is Apache. I have Apache, Tomcat and Jetty installed. What can I do so that I don't have to specify the port? Do I have to perform port forwarding? Thanks a lot.
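    One way this is often handled, sketched below under the assumption that Apache isn't actually needed on port 80 and that iptables is available, is to redirect incoming port 80 traffic to Jetty's port (an Apache reverse proxy with mod_proxy in front of Jetty is the usual alternative if Apache has to stay):

        # Stop Apache so port 80 is free (init script name depends on the distro).
        sudo service apache2 stop
        # Redirect external port 80 to the Jetty listener on 8090.
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8090
        # Note: this rule is not persistent across reboots on its own.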

    Read the article

  • OpenLDAP PAM authentication does not support SSHA on FreeBSD 10

    - by suker200
    OpenLDAP PAM authentication does not support SSHA? Hi everyone. I just lost a day figuring out that my FreeBSD 10 machine cannot authenticate SSH users via LDAP, because pam_ldap and nss_ldap do not support SSHA passwords even though OpenLDAP supports the SSHA method. I have checked /usr/local/etc/ldap.conf; it only offers these pam_password methods: clear, crypt, nds, racf, ad, exop. If I switch to CRYPT, I can authenticate successfully. So I would be very grateful for any pointers or suggestions on making PAM on FreeBSD 10 support SSHA - is there a way, or is it not possible? Info: the LDAP server is 389 DS on CentOS; the LDAP client is FreeBSD 10. What I have so far: LDAP authentication between CentOS and CentOS works; CentOS (LDAP server) to FreeBSD fails (it works if I use crypt). Thanks and BR, Suker200

    Read the article

  • Redmine + Backlogs not working on Turnkey Linux (Ubuntu)

    - by Riddler
    I'm trying to get Redmine + Backlogs working, so for starters I took a virtual appliance with Redmine from Turnkey Linux (http://www.turnkeylinux.org/redmine) and installed Backlogs on top of it, following the installation instructions (http://www.redminebacklogs.net/en/installation/ - I used method #2). It seems to have installed OK, but when I go to the "Backlogs" tab and attempt to create some stories, this is what I get: the first shows some kind of error/warning icon, the others continue to display the "in progress" icon indefinitely (I can't post a screenshot, unfortunately, but you can take a look at it here: http://www.redmine.org/attachments/5329/Backlogs.jpg). None of the stories actually get created; leaving this tab and returning to it shows empty backlogs. So, what am I doing wrong, and how do I fix this?

    Read the article

  • SSH broken after homedir permissions and hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives a reboot. My main motivation was differentiating between instances at the prompt using the \h format in PS1.

    EDIT: I also changed permissions on my home directory. I made my home directory group-writeable.

    Now I can no longer SSH into the machine. The short of it is the error "Permission denied (publickey)". Running ssh -v, the more verbose output is:

        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
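    For reference, sshd in its default StrictModes configuration refuses public-key authentication when the home directory or ~/.ssh is group- or world-writable. A minimal sketch of the permissions it expects, to be applied from another account with sudo or by attaching the volume to a second instance (the username here is an assumption; use the account you actually log in with):

        # Tighten the permissions sshd's StrictModes check expects.
        chmod g-w /home/ubuntu                       # home dir must not be group-writable
        chmod 700 /home/ubuntu/.ssh
        chmod 600 /home/ubuntu/.ssh/authorized_keys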

    Read the article

  • Debian 6 Internet connection sharing aka IP masquerade not working

    - by Rautamiekka
    The problem: the computers [an Xbox 360 and a Kubuntu 12.04.1 laptop] can't access the Internet through a recently installed desktopless Debian 6 laptop (which is wirelessly connected to a WLAN station), although addresses are successfully handed out by dnsmasq.

    The attempts:

    1.1) /etc/dnsmasq.conf configured according to http://wiki.debian.org/HowTo/dnsmasq by adding the lines:

        interface=eth0
        dhcp-range=192.168.0.50,192.168.0.150,255.255.255.0,12h

    1.2) Follow http://www.cyberciti.biz/faq/rhel-fedora-linux-internet-connection-sharing-howto/ and use their script to set up iptables.

    2) Follow the Ubuntu Internet Gateway Method (iptables) at https://help.ubuntu.com/community/Internet/ConnectionSharing, which was recommended and reported working at "Share internet in Linux".

    The Debian laptop was rebooted many times and between each attempt, with and without the script auto-executing via /etc/rc.local. While adding the iptables-restore command to that file I disabled the script.
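    For what it's worth, the core of both guides boils down to enabling forwarding and masquerading on the uplink. A minimal sketch, where the interface names are assumptions (wlan0 = uplink to the WLAN station, eth0 = LAN side served by dnsmasq; swap them to match the actual setup):

        # Enable IPv4 forwarding and NAT out of the wireless uplink.
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT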

    Read the article

  • Reboot VPS by reaching memory limit

    - by Ali
    When a server uses more memory than the available RAM, the system will shut down the virtual machine. Then it is only possible to boot it from outside (the VPS control panel, e.g. vePortal or SolusVM). However, it should be possible to schedule a reboot before a possible shutdown. What is the best practical method to check the used memory and reboot the system upon reaching e.g. 90% of the allowed RAM? Is there a common program or script to do so? I am using Debian/Ubuntu.
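    A minimal sketch of the kind of script this is asking about, meant to be run from cron every few minutes; the 90% threshold and the decision to reboot (rather than, say, restart the offending service) are assumptions to adjust:

        #!/bin/sh
        # Reboot when used memory exceeds a threshold.
        THRESHOLD=90
        total=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
        avail=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
        # MemAvailable only exists on newer kernels; fall back to MemFree if missing.
        [ -z "$avail" ] && avail=$(awk '/^MemFree/ {print $2}' /proc/meminfo)
        used_pct=$(( (total - avail) * 100 / total ))
        if [ "$used_pct" -ge "$THRESHOLD" ]; then
            logger "memory usage at ${used_pct}%, rebooting"
            /sbin/shutdown -r now
        fi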

    Read the article

  • Remote Yum mirror

    - by specto
    I have a bunch of remote computers that must be updated to the most recent packages for RedHat 4 and RedHat 5. I am using mrepo to mirror the RHN packages; however, the remote computers do not have an internet connection. Because of this I have to update the mirror server that is part of the remote setup with a DVD; this keeps shipping costs down to just a DVD. I am attempting to script this so I can fit all of the new packages on a CD or a DVD. I send updates about once or twice a month, depending on package requirements. So my question is: is there a good method to do this so that the only things transferred are the new packages? I wish I could just use rsync. Thanks.
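    Rsync can actually do the "only what's new" part locally, without any link to the remote side, as long as a copy of what was last shipped is kept around; a sketch of that idea using --compare-dest, where all three paths are placeholders:

        # Stage only packages that are new relative to the last shipment.
        # /srv/mrepo/rhel5   = current local mirror
        # /srv/shipped/rhel5 = copy of what was on the previous DVD
        rsync -av --compare-dest=/srv/shipped/rhel5/ /srv/mrepo/rhel5/ /srv/dvd-staging/rhel5/
        # After burning the DVD, record the new state for next time:
        rsync -a /srv/mrepo/rhel5/ /srv/shipped/rhel5/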

    Read the article

  • Join .doc files into one .doc (with keeping the original format of every document)

    - by Shiki
    I have about 50 .doc files that look perfect (they were extracted with Able2Extract). Now I want to join these 50 files into one huge .doc. I've tried using Word's built-in "Insert" feature, but that messed up the whole format. I want to keep everything I have, just document1 - document2 - document3. Nothing "intelligent" or "smart" is needed during the conversion, just the capability of joining them. (Thus making them all searchable; that's the ultimate aim.) I don't mind if the method/solution adds a single blank page at the end of every document either.

    Read the article
