Search Results

Search found 5262 results on 211 pages for 'commands'.

  • Sudden problems with iptables not running

    - by Fourjays
    I've got a sudden issue with iptables not running on my CentOS 5.8/DirectAdmin XenVPS. All I have done today is install PHP APC and run an update (although I admittedly didn't pay much attention today - I usually do). Iptables has been running fairly smoothly since I installed it over 6 months ago. Basically, when I try to run iptables -L it tells me:

        iptables v1.3.5: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.

    I've looked around and tried a few things, and it appears that maybe my kernel doesn't have the modules loaded? I've been reading this and tried the two commands they suggest, to no avail. Except there does appear to be a mismatch in one bit of output:

        -bash-3.2# cd /lib/modules
        -bash-3.2# ls
        2.6.18-194.32.1.el5xen  2.6.18-238.5.1.el5xen  2.6.18-274.7.1.el5xen  2.6.39.1-cs-domU
        2.6.18-238.12.1.el5xen  2.6.18-238.9.1.el5xen  2.6.37.2-cs-domU       3.0.1-cs-domU
        -bash-3.2# depmod -a
        WARNING: Couldn't open directory /lib/modules/2.6.18-274.18.1.el5xen: No such file or directory
        FATAL: Could not open /lib/modules/2.6.18-274.18.1.el5xen/modules.dep.temp for writing: No such file or directory

    Does this mean the versions are out of sync? If so, what are my next steps to getting this fixed? As you can probably tell I am still learning how to manage my server, so please be very clear in all advice. Many thanks :)
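
    To show what I have been poking at, this is roughly the check I had in mind - my own untested sketch, and the module name is just my guess:

        # Compare the kernel that is actually running with the module trees on disk
        uname -r
        ls /lib/modules

        # If a matching directory exists, try loading the netfilter module by hand
        modprobe ip_tables && lsmod | grep ip_tables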

  • Moving hidden files/folders with the command-line or batch-file

    - by Synetech
    Question

    Does anyone know of a way to move files and folders that have the hidden, system, or read-only attribute set from the command-line or a batch file? (No, stripping the attributes first is not an option, since there is no practical way to know which attributes were set in order to re-set them after the move.)

    (Failed) Attempts

    Using the basic move command does not work with items that have the hidden or system attribute set, and for some reason it does not have switches to specify attributes like the dir and del commands do. I tried using a utility I wrote that uses the shell’s file operation function, but that requires using start /w to prevent the batch file from running on ahead, and it complains about long-filename support for some reason. I tried using robocopy, but it first copies the files and then deletes the originals instead of simply moving the source (which results in a frustrating delay, even with the excessive output redirected to nul).

    (Surprisingly, it seems that few people have ever needed to move hidden files from the command-line. All I could find was this one person who abandoned the attempt.)
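
    For what it's worth, the only other idea I have had is shelling out to PowerShell from the batch file - something like this untested sketch, where the paths are placeholders:

        :: Untested idea: let PowerShell do the move.
        :: -Force is needed so hidden/system items matched by the wildcard are included
        :: and read-only items do not block the move; attributes should come along unchanged.
        powershell -NoProfile -Command "Move-Item -Path 'C:\src\*' -Destination 'D:\dst\' -Force"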

  • Win7 - Opening "Programs and Features" as Admin from command line (logged in as regular user)

    - by user1741264
    We have Win7 machines on a domain, and we'd like to open the "Programs and Features" control applet via the command line while a regular user is logged in. Here's the catch: I know how to do this using runas from the command line, BUT after "Programs and Features" opens, I don't truly have the ability to remove a program. I am told that I need to be an Admin to do so. Here are the commands I have tried:

        runas /user:%computername%\administrator cmd.exe
        (then in the new cmd window)  control appwiz.cpl

        runas /user:%companydomain%\%domainadminacct% cmd.exe
        (then in the new cmd window)  control appwiz.cpl

        runas /user:%computername%\administrator cmd.exe
        (then in the new cmd window)  rundll32.exe shell32.dll,Control_RunDLL appwiz.cpl

        runas /user:%companydomain%\%domainadminacct% cmd.exe
        (then in the new cmd window)  rundll32.exe shell32.dll,Control_RunDLL appwiz.cpl

    I have also tried all of the above as one long command line instead of launching a cmd.exe as Admin. As you can see, I have tried running the command using both a local admin account (Administrator) AND a domain admin account. I have also tried launching the runas command as one long command (opening "Programs and Features" directly) AND first launching a cmd.exe with admin rights and THEN launching the "Programs and Features" window. The result is the same: the "Programs and Features" window opens, but when I try to perform an uninstall, I am told I need Admin rights. Thus I am led to believe that this instance of "Programs and Features" is not truly being run as an admin. I am trying to avoid logging the regular user out. I am also aware that every program has its own uninstaller; I do not want to uninstall that way. I want to use the uninstaller in "Programs and Features". Any help is appreciated.

  • How to migrate Fedora DS (389 DS) to a new machine?

    - by zengr
    Hello, I am trying to migrate a Fedora DS (1.2.2) to a new server (1.2.7.5). The process has been painful, to say the least. The old server (1.2.2) was itself an upgrade from an old Fedora DS setup, so it does not contain migrate-ds-admin.pl. I found this question, but the URL does not open. I am aware that I need to use migrate-ds-admin.pl, but I am clueless. How do I use it? I assume it works like this:

        1. Copy migrate-ds-admin.pl from the server which has 1.2.7 to 1.2.2.
        2. Run migrate-ds-admin.pl to export the schema+ldif from 1.2.2.
        3. Import the schema+ldif into 1.2.7 using migrate-ds-admin.pl.

    If the above is true, then what parameters are needed for export and import? Note:

        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif

    The above two commands work like a charm, but since the schema (custom schema) is not migrated, I see a lot of errors during import.
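
    In case it helps to see what I mean, this is the rough export/import sequence I have been imagining - my own guess, untested, and the instance names and paths are assumptions:

        # On the old 1.2.2 server: dump the databases to LDIF
        ./db2ldif -n NetscapeRoot -a /root/NetscapeRoot.ldif
        ./db2ldif -n userRoot -a /root/userRoot.ldif

        # Copy the custom schema files over by hand (99user.ldif and friends)
        scp /etc/dirsrv/slapd-OLDINSTANCE/schema/99user.ldif newserver:/etc/dirsrv/slapd-NEWINSTANCE/schema/

        # On the new 1.2.7.5 server: restart dirsrv so it picks up the schema, then import
        service dirsrv restart
        ./ldif2db -n NetscapeRoot -i /root/NetscapeRoot.ldif
        ./ldif2db -n userRoot -i /root/userRoot.ldif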

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: Secure LDAP queries via command-line and PHP to an AD domain controller with a self-signed certificate.

    Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to an MS AD domain controller that is using a self-signed certificate. This self-signed certificate is also using a domain name that is not a FQDN - think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify whether the credentials are a match or not. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never. I have also read in some places online that one can take a certificate and place it as a trusted source within the openldap configuration file. I am curious whether that is something that I could do for my situation. Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have openldap use that file for the trust that it needs, so that I do not need to adjust the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try to get this going. Thanks in advance for your help and ideas.
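
    For the record, this is the sort of thing I was hoping might work, if it is even valid - an untested sketch, where the host name and file paths are made up:

        # Grab the certificate the domain controller presents on the LDAPS port
        openssl s_client -connect dc.people.campus:636 -showcerts </dev/null \
            | openssl x509 -out /etc/openldap/certs/ad-dc.pem

        # Point the OpenLDAP client at it instead of turning verification off
        # (in /etc/openldap/ldap.conf)
        #   TLS_CACERT  /etc/openldap/certs/ad-dc.pem
        #   TLS_REQCERT demand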

  • Using Samba to share a folder from a Linux guest with a Windows host in VirtualBox

    - by AmV
    I would like to share a folder from a Linux guest with a Windows host (with read and write access if possible) in VirtualBox. I read in these two links: here and here that it's possible to do this using Samba, but I am a little bit lost and I need more information on how to proceed. So far, I managed to set up two network adapters (one NAT and one host-only) and install Samba on the Linux guest, but now I have the following questions:

        1. What do I need to type in smb.conf to share a folder from the Linux guest? (The tutorial provided in one of the links above only explains how to share home directories.)
        2. Are there any Samba commands that need to be executed on the guest to enable sharing?
        3. How do I make sure that these folders are only available to the host OS and not on the Internet?
        4. Once the Linux guest is set up, how do I access each of the individual shared folders from the Windows host? I read that I need to mount a drive on Windows to do this, but do I use Samba logins or Linux logins? And do I use localhost, or do I need to set up an IP for this?

    Thanks!
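
    My best guess so far, pieced together from those tutorials, looks like this - completely unverified, and the share name, path, address and user are placeholders:

        # /etc/samba/smb.conf on the guest - bind only to the host-only interface
        [global]
            interfaces = 192.168.56.101/24
            bind interfaces only = yes

        [shared]
            path = /home/me/shared
            valid users = me
            read only = no

        # then, on the guest
        sudo smbpasswd -a me
        sudo service smbd restart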

  • Enable SSL with Jetty 8

    - by Jerec TheSith
    I received certificates from GoDaddy and I'm trying to enable SSL with Jetty, but I receive an error 107 (SSL protocol error) when connecting to https://server.com:8443. I generated the keystore using these commands:

        keytool -keystore keystore -import -alias gd_bundle -trustcacerts -file gd_bundle.crt
        keytool -keystore keystore -import -alias server.com -trustcacerts -file server.com.crt

    and placed it in /opt/jetty/etc/. And I used the following configuration in jetty.xml:

        <Call name="addConnector">
          <Arg>
            <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
              <Arg>
                <New class="org.eclipse.jetty.http.ssl.SslContextFactory">
                  <Set name="keyStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
                  <Set name="keyStorePassword">**password1**</Set>
                  <Set name="keyManagerPassword">**password1**</Set>
                  <Set name="trustStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
                  <Set name="trustStorePassword">**password1**</Set>
                </New>
              </Arg>
              <Set name="port">8443</Set>
              <Set name="maxIdleTime">30000</Set>
              <Set name="Acceptors">2</Set>
              <Set name="statsOn">false</Set>
              <Set name="lowResourcesConnections">20000</Set>
              <Set name="lowResourcesMaxIdleTime">5000</Set>
            </New>
          </Arg>
        </Call>

    Am I missing something in Jetty's configuration?
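
    One thing I am second-guessing is whether the keystore ever contained a private key at all, since I only ever imported certificates. Is the normal sequence supposed to look more like this? (my guess, untested - the alias and filenames are made up):

        # 1. Generate the key pair in the keystore (this is where the private key comes from)
        keytool -keystore keystore -alias server.com -genkeypair -keyalg RSA -keysize 2048

        # 2. Create the CSR that gets sent to GoDaddy
        keytool -keystore keystore -alias server.com -certreq -file server.com.csr

        # 3. Import the CA bundle, then the signed certificate under the SAME alias as the key
        keytool -keystore keystore -alias gd_bundle -import -trustcacerts -file gd_bundle.crt
        keytool -keystore keystore -alias server.com -import -trustcacerts -file server.com.crt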

  • Connect through remote computer connection

    - by Didac
    First, sorry for my English and my poor knowledge of this subject. I have a dedicated server located in Germany (Windows 2008 R2) and I live in Spain. I would like to access the Internet from my home computer (Windows 7 Pro x64) through my server in Germany, so I can use a German IP, which I need sometimes. I have complete access to both computers, but I just don't know where to start. (My knowledge is limited to software development :/ ) I'd like to know where to start, and whether I need to create a VPN and so on. Thanks in advance!

    Update 1

    I tried a lot of options in OpenVPN, but sadly I know nothing about networking, so I have to accept that I do not know what I'm doing :( Here are my config files (note most of the options are from the sample config files).

    server.conf:

        # server config file start
        port 1194
        proto udp
        dev tun
        server 10.0.0.0 255.255.255.224  # you may choose any subnet. 10.0.0.x is used for this example.
        ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt"
        cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.crt"
        key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\server.key"
        dh "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\dh1024.pem"
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        # the following commands are optional
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        verb 5
        # config file ends

    client.conf:

        # client config file start
        client
        dev tun
        proto udp
        remote 176.9.99.180 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\ca.crt"
        cert "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.crt"
        key "C:\\Program Files (x86)\\OpenVPN\\easy-rsa\\keys\\client1.key"
        ns-cert-type server
        comp-lzo
        verb 5
        explicit-exit-notify 2
        ping 10
        ping-restart 60
        route-method exe
        route-delay 2
        # end of client config file

    And here are the server's network settings:

        IP address: 176.9.99.180
        Subnet mask: 255.255.255.224
        Default gateway: 176.9.99.161
        Preferred DNS server: 127.0.0.1

  • Mac updated just now, postgres now broken

    - by user52224
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app, I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting
        connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

        - Running psql from the command line gives the same error.
        - I can run pgAdmin 3, connect via it and run SQL, no problems.
        - Running which psql shows the version as /usr/bin/psql.
        - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.
        - I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.

    Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
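
    One theory I'm toying with is that my client and server simply disagree about where the Unix socket lives after the update - I was going to check it roughly like this (untested; the paths are just the one from the error plus the 9.1 install directory):

        # Where is the server actually listening?
        ls -l /var/pgsql_socket/ /tmp/.s.PGSQL.5432 2>&1

        # Which psql am I actually running (Apple's or the 9.1 install's)?
        which -a psql

        # Force a TCP connection to sidestep the socket question entirely
        psql -h localhost -U postgres -l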

  • Internet works on the router but not on the local network

    - by TheMouse
    I have a PC and a laptop (Windows 7 on both) which should connect through a router to the Internet. The router is a Linksys WRT120. My ISP uses PPPoE. I have connected the Internet cable to the router, cloned the MAC of my PC, and entered the username and the password for my Internet connection. After a few seconds the router acquired an IP from my ISP. Using the administration panel of the Linksys, with the help of the ping and tracert commands that are built into it, I can reach the world outside the network. The problem is with the PC and the laptop. Connecting them to the local network itself is no problem: the DHCP server of the router gives them appropriate addresses. The problem is that they cannot reach either Internet hostnames (google.com) or IP addresses, although they can reach the router and its control panel. I tried several times and reset the router, but there's still no Internet.

  • Is git-annex appropriate for my scenario?

    - by Karel Bílek
    I have a git repository with source code that I want to put in the open on GitHub. However, I also have gigabytes of data that I don't want to have in the open or in the repo - they are big, they are proprietary, they are "burdened" with copyrights and so on. However, they are also logically part of the same project, and I wish to have some control over their history (basically, what git already does). Right now, I have them in the directory "data" in the repository, I have that directory ignored, and I have given up on getting them into git. However, I have read about git-annex and it seems it can do what I want. So, I have two questions:

        1. Is git-annex appropriate for me?
        2. How exactly should I use git-annex for my scenario? Meaning - which commands should I use, and how?

    I have tried to read the official documentation, but it talks about use cases that I don't care about. I have the data on one computer only and I don't think I will be moving it soon (it's nice to have the possibility, but it's not why I want to use git-annex). Also, the documentation is pretty hard to read.
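
    To make the second question concrete: is the basic flow really just something like this? (my guess from skimming the walkthrough, untested):

        # inside the existing repository
        git annex init "my laptop"

        # hand the big files to the annex instead of to git proper
        git annex add data/
        git commit -m "Add data files to the annex"

        # the content now lives under .git/annex and data/ holds symlinks,
        # so the public GitHub remote never receives the file contents
        # (as long as the annexed content itself is not pushed anywhere public)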

  • How can I create a VLAN on my extreme switch for a separate subnet/domain?

    - by drpcken
    I'm putting together a small Active Directory implementation for a buddy of mine. I currently have 2 servers (one is the primary domain controller) and a couple of clients. I need to test and run updates on every machine on this domain, but I would have to plug them into my current LIVE domain to get internet access. From what I've read, having two separate domains on a single subnet is a bad idea (even though it is temporary), so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my Extreme 48-port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access, of course (one of the things I can't wrap my head around is routing internet traffic between subnets, since the gateway is on the production subnet). The switch is a Summit X450e-48p. My production domain is on subnet 192.168.200.0. The new domain I want to put online would go into subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!
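
    From the bits of ExtremeXOS documentation I've skimmed, I think the VLAN side looks roughly like this, though I have not tried it and the VLAN name, tag and port numbers are made up:

        create vlan "lab"
        configure vlan lab tag 10
        configure vlan lab ipaddress 192.168.10.1 255.255.255.0
        configure vlan lab add ports 45-48 untagged
        enable ipforwarding vlan lab

    What I still don't get is what the 192.168.10.0 machines would then use as their route out to the internet, given that the gateway sits on the production subnet.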

  • Rebuilding LVM after RAID recovery

    - by Xiong Chiamiov
    I have 4 disks RAID-5ed to create md0, and another 4 disks RAID-5ed to create md1. These are then combined via LVM to create one partition. There was a power outage while I was gone, and when I got back, it looked like one of the disks in md1 was out of sync - mdadm kept claiming that it could only find 3 of the 4 drives. The only thing I could do to get anything to happen was to use mdadm --create on those four disks, then let it rebuild the array. This seemed like a bad idea to me, but none of the stuff I had was critical (although it'd take a while to get it all back), and a thread somewhere claimed that this would fix things. If this trashed all of my data, then I suppose you can stop reading and just tell me that. After waiting four hours for the array to rebuild, md1 looked fine (I guess), but LVM was complaining about not being able to find a device with the correct UUID, presumably because md1 changed UUIDs. I used the pvcreate and vgcfgrestore commands as documented here. Attempting to run lvchange -a y on it, however, gives me a "resume ioctl failed" message. Is there any hope of recovering my data, or have I completely mucked it up?

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part I do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do hg add for a repo with a .csv and/or .sqlite file, I get a warning like:

        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file
        (use 'hg revert rev2.sqlite' to cancel pending addition)

    So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions:

        1. Should I ignore these warnings? Because these large files are static, can I just add them to the repo, knowing that the diff files will always be empty, and not worry about wasted resources?
        2. If I should keep excluding these files from the repo, is there a way that I can fix this option? I.e., can I add to my .hgrc file something that always appends an option like -I *.tex -I *.R to my hg add commands?

    Thanks!
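
    The closest things I have found so far for question 2 are per-repo ignores and the defaults section, which I assume would look something like this (untested guess):

        # .hgignore at the repository root - keeps the static files out of 'hg add' entirely
        syntax: glob
        *.sqlite
        *.csv

        # or, in ~/.hgrc, options tacked onto every 'hg add' (mirrors what I asked for above)
        [defaults]
        add = -I *.tex -I *.R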

  • Borked Ubuntu uninstall - need to delete boot partition (I think)

    - by Max Williams
    I just got a new PC laptop with Windows 7 and wanted to install Ubuntu on it. Which I did, no problem there, by downloading the installer, burning it to DVD, then booting off the DVD and installing. Then I realised that the new Ubuntu 12.04 uses the Unity desktop, which I immediately disliked and, after some research, began to hate. So I decided (after a little googling) to install Linux Mint instead. Thinking I'd better start from scratch, I went to the Windows 7 disk manager and wiped the Ubuntu partition that had been created. Now, when I start up, I get an error from GRUB, the Ubuntu boot manager:

        error: unknown filesystem
        grub rescue> _

    and a blinking cursor where I can enter commands. I suspect that what I've done is deleted the main Ubuntu partition but NOT another partition which is a boot partition, or something like that? Can anyone tell me how I can rescue or unbork this? I'd like to either a) get back to my original Windows-only setup OR b) install Linux Mint off the DVD (which I have) into the empty partition, fixing any GRUB confusion in the process. Any suggestions? Thanks, Max. BTW, please don't answer if you're just going to tell me to stick with 12.04, or install a different distro or something. I definitely want Mint and just want to fix this mess - thanks :)
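
    For option a), am I right that booting from the Windows 7 install/repair disc and running these from its command prompt would put the Windows boot loader back in charge? This is just my reading of other answers, untested, and assumes the laptop boots in plain BIOS/MBR mode:

        REM from the Windows 7 recovery environment command prompt
        bootrec /fixmbr
        bootrec /fixboot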

  • How to calibrate ASUS k52f battery on Ubuntu?

    - by cutalion
    I'm not sure if the problem is in software or my battery is dying (I'll move my question to another forum if it's not a SO question). I have a problem with the battery on my laptop, an ASUS K52F. It shows incorrect information about capacity. When I unplug the charger it can work for some time, but then it will power off without any warning. Sometimes it will power off right after I unplug the charger. Here is some info I could get:

        > uname -a
        Linux alligator 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        > acpi -i
        Battery 0: Charging, 99%, 18:25:15 until charged
        Battery 0: design capacity 5235 mAh, last full capacity 69964 mAh = 100%

        > cat /sys/class/power_supply/BAT0/uevent
        POWER_SUPPLY_NAME=BAT0
        POWER_SUPPLY_STATUS=Charging
        POWER_SUPPLY_PRESENT=1
        POWER_SUPPLY_TECHNOLOGY=Li-ion
        POWER_SUPPLY_CYCLE_COUNT=0
        POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000
        POWER_SUPPLY_VOLTAGE_NOW=9246000
        POWER_SUPPLY_POWER_NOW=176000
        POWER_SUPPLY_ENERGY_FULL_DESIGN=**48400000**
        POWER_SUPPLY_ENERGY_FULL=**646822000**
        POWER_SUPPLY_ENERGY_NOW=**643588000**
        POWER_SUPPLY_MODEL_NAME=K52F-44
        POWER_SUPPLY_MANUFACTURER=ASUSTek
        POWER_SUPPLY_SERIAL_NUMBER=

    I noticed that POWER_SUPPLY_ENERGY_NOW and POWER_SUPPLY_ENERGY_FULL are greater than POWER_SUPPLY_ENERGY_FULL_DESIGN. I don't think that's ok :) I can run any additional commands.

  • How can I "filter" postfix-generated bounce messages?

    - by Flimzy
    We are using Postfix 2.7 and a custom SMTPD (based on qpsmtpd) in a highly customized configuration for spam filtering. We have a new requirement to filter Postfix-generated bounces through our custom qpsmtpd process (not so much for content filtering, but to process these bounces accordingly). Our current configuration looks (in part) like this.

    master.cf (only customizations shown):

        2526      inet  n       -       -       -       0       cleanup
        pickup    fifo  n       -       -       60      1       pickup
            -o content_filter=smtp:127.0.0.2

    Our smtpd injects messages into Postfix on port 2526, by speaking directly to the cleanup daemon. And the custom pickup entry instructs Postfix to hand off all locally-generated mail (from cron, nagios, or other custom scripts) to our custom smtpd. The problem is that this configuration does not affect Postfix-generated bounce messages, since they do not go through the pickup daemon. I have tried adding the same content_filter option to the bounce daemon entries, but it does not seem to have any effect:

        bounce    unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2
        defer     unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2
        trace     unix  -       -       -       -       0       bounce
            -o content_filter=smtp:127.0.0.2

    For reference, here is my main.cf file as well:

        biff = no
        # TLS parameters
        smtpd_tls_loglevel = 0
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache
        smtp_tls_security_level = may
        mydestination = $myhostname
        alias_maps = proxy:pgsql:/etc/postfix/dc-aliases.cf
        transport_maps = proxy:pgsql:/etc/postfix/dc-transport.cf
        # This is enforced on incoming mail by QPSMTPD, so this is simply
        # the upper possible bound (also enforced in defaults.pl)
        message_size_limit = 262144000
        mailbox_size_limit = 0
        # We do our own message expiration, but if we set this to 0, then postfix
        # will try each mail delivery only once, so instead we set it to 100 days
        # (which is the max postfix seems to support)
        maximal_queue_lifetime = 100d
        hash_queue_depth = 1
        hash_queue_names = deferred, defer, hold

    I also tried adding the internal_mail_filter_classes option to main.cf, but also to no effect:

        internal_mail_filter_classes = bounce,notify

    I am open to any suggestions, including handling our current content-filtering loop in a different way. If it's not clear what I'm asking, please let me know and I can try to clarify.

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:

        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address

    Then I hot-swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays got rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the array, and its status remains degraded. Here's what mdadm --detail /dev/md2 shows:

        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

          Intent Bitmap : Internal

            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

                   Name : ldmohanr.net:2  (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed

               2       8       18        -      spare   /dev/sdb2

    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
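
    The closest thing I've found in the man page is the special device names, so I was planning to try something like this next (untested - I'd appreciate confirmation before I run it):

        # 'failed' and 'detached' are mdadm keywords that match faulty or vanished members,
        # so no /dev/... name is needed for the ghost slot
        mdadm /dev/md2 --remove failed
        mdadm /dev/md2 --remove detached

        # if the slot clears, the existing spare should start rebuilding on its own;
        # watch it with: cat /proc/mdstat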

  • Converting flv and mp4 video format to '.ogg' using FFmpeg

    - by user163906
    I have a HostGator VPS with FFmpeg installed. It allows me to convert .wmv to .flv as well as to .mp4 successfully, using the following commands for flv and mp4:

        ffmpeg -i WantsABath.wmv -b 600k -r 24 -ar 22050 -ab 96k WantsABath.flv
        ffmpeg -i WantsABath.wmv WantsABath.mp4

    but it won't let me convert any file format to .ogg. I tried using the command suggested by mondain:

        ffmpeg -i input.mp4 -acodec libvorbis -vcodec libtheora -f ogv output.ogv

    but had no luck with it. I suspect that my VPS doesn't have libtheora installed. I tried configuring it over SSH, but I don't know how to make sure whether it is installed or not. I tried checking with php_info but can't find anything regarding libtheora. Here's my FFmpeg version:

        FFmpeg version SVN-r19795, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-libmp3lame --enable-libvorbis --disable-mmx --enable-shared --prefix=/usr/ --enable-gpl
          libavutil     50. 3. 0 / 50. 3. 0
          libavcodec    52.35. 0 / 52.35. 0
          libavformat   52.38. 0 / 52.38. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0. 7. 1 /  0. 7. 1

    These details don't show libtheora. Can anyone please suggest something?
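
    Am I right that the fix is basically to install libtheora and rebuild FFmpeg with it enabled? Something like this is what I was planning to try - untested, and it assumes a CentOS-style VPS where I can build from source (package name is my guess):

        # install the theora headers first
        yum install libtheora-devel

        # then rebuild ffmpeg with theora support added to the existing flags
        ./configure --enable-gpl --enable-shared --prefix=/usr/ \
                    --enable-libmp3lame --enable-libvorbis --enable-libtheora
        make && make install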

  • How can I redirect/forward all the UDP/TCP traffic on one interface to another interface in OpenWrt

    - by Sina Sou
    I am new to networking. I have a measurement device (D) that periodically sends all its readings over a few UDP multicast sockets (with different multicast IP addresses and different port numbers). The device also listens on a TCP socket, port 7234, through which its configuration can be modified. Since the device has only an Ethernet interface for communication and I want to make it work wirelessly, I decided to attach a very small OpenWrt-based wireless router to the device (D) and redirect/forward all the network traffic (both UDP and TCP) to the router's wireless interface. To simplify the problem, assume that the device (D) has the following sockets open at the same time:

        UM_SOCK1: UDP mcast socket on 239.1.2.3, port 50620
        UM_SOCK2: UDP mcast socket on 239.1.2.4, port 50640
        TC_SOCK3: TCP, DHCP/static IP address 192.168.1.200, port 7234

    And (D) is connected to the OpenWrt router (R) via interface en01 (Ethernet); the router has its own wireless interface, wlan01. I want all the traffic to pass between the two interfaces in both directions:

        en01 <----> wlan01

    What would be the minimum iptables (or other) commands I need to make this possible? I am also wondering whether the redirection can be made simpler by not basing it on IP addresses (not desirable if the device gets its address via DHCP); I would rather the redirection be interface-based (en0) or MAC-address-based (the best solution, since my device has a unique MAC address). Thanks

  • Booting Ubuntu as VM with KVM on Ubuntu 12.04

    - by CrazycodeMonkey
    I am trying to boot my very first VM using KVM. I have Ubuntu 12.04 installed, and I made sure the BIOS had the right virtualization flag enabled for the Intel processor by running kvm-ok. I have researched this on Google and all the instructions that I have found so far are outdated. E.g., most instructions talk about booting a virtual machine with the following commands:

        # create a virtual disk for your VM
        qemu-img create -f qcow2 foo.img 100G

        # run kvm
        kvm --name foo -m 1024 -hda foo.img -cdrom whatever.iso -boot d

    This command line is incomplete. First, you need to be root to run it. Second, it is missing an option for the video device. When you run this command you get the following error:

        Could not initialize SDL(No available video device) - exiting

    I googled this error and looked it up on Stack Overflow: http://stackoverflow.com/questions/4841908/sdl-init-failure-reason-is-no-available-video-device. The answer provided there does not work on Ubuntu 12.04. I googled this problem further and found out that I need to specify a video device, so I finally ran the following command:

        sudo kvm --name mymachine -m 8096 -hda myimage.img --cdrom ubuntu.iso -boot d -vga cirruss -k en-us -vmc :0

    This was after I had created the myimage.img image on the drive. Now this command does not give me an error, but it just hangs. Does anyone have clear instructions on how to run a VM using KVM on Ubuntu?
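
    In case it matters, the variant I was going to try next looks like this - untested, and note I'm guessing that my earlier "cirruss" and "-vmc" were typos for "cirrus" and "-vnc":

        # create the disk as before
        qemu-img create -f qcow2 myimage.img 20G

        # boot the installer and expose the console over VNC instead of SDL
        sudo kvm -name mymachine -m 1024 -hda myimage.img -cdrom ubuntu.iso \
                 -boot d -vga cirrus -k en-us -vnc :0

        # then connect to the console from a desktop with:  vncviewer localhost:0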

  • Help, my CentOS servers keep going down with "No route to host" after a random uptime

    - by user249071
    Hello, I have a couple of CentOS Linux servers that have a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They accept some RPC commands from a main server to start download processes with wget - nothing fancy. But their behaviour is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections; they don't load up that much - 250 network connections max, 15% processor usage, and memory doesn't even fill up, 2.5 GB out of 8 GB max. I have no idea why a Linux server would go down like that; they aren't even public servers - no domain names installed, no public serving of sites. The only thing that I've discovered was that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, while still not reporting high resource usage. Maybe CentOS doesn't free up the timed-out connections, or something like that? It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed snort or other tools to check whether we are getting DoS attacks; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance.
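
    One thing I was told to check, which I haven't fully understood yet, is whether the connection-tracking table is filling up - apparently something like this would show it (untested; the sysctl names may differ on my kernel):

        # any "table full, dropping packet" messages point at conntrack exhaustion
        dmesg | grep -i conntrack

        # current vs. maximum tracked connections
        sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
        # on older CentOS 5 kernels the names are net.ipv4.netfilter.ip_conntrack_count / _max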

  • Set up Linux box as WAP for MyBookLive?

    - by AcidFlask
    I inherited an old Linux box as well as a MyBookLive and would like to make the MyBookLive available over my wireless, essentially using the Linux box as a wireless access point. I just wiped the Linux box (home) and installed Ubuntu 12.04 on it. My network setup currently looks like this:

        (192.168.0.1 netmask 255.255.255.0)
        ISP --- wireless router --- wlan0 on home (192.168.0.12)
                                       |
                                    eth0 on home --- MyBookLive
        MacBook (192.168.0.11)

    so that the MyBookLive is basically a glorified external hard drive. The router does have an Ethernet port, but it is being used by my roommate's computer, so I can't plug the MyBookLive directly into it. Right now I can ping MyBookLive.local and MacBook.local from home, but I am having trouble understanding and figuring out what the correct iptables commands are to make my MacBook see my MyBookLive through the Bonjour network. Also, I'm not sure if I need to set up DNS to forward xxx.local Bonjour/Zeroconf addresses. I tried the following to forward my entire wired network (which has only my MyBookLive) to a single IP address:

        sysctl net.ipv4.ip_forward=1
        iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -t nat -A PREROUTING -i eth0 -p tcp -j DNAT --to 192.168.0.66
        iptables -t nat -A PREROUTING -i eth0 -p udp -j DNAT --to 192.168.0.66

    but I can't ping this address from my MacBook. This is probably horribly wrong, but I am a complete noob at setting up this kind of network and could use some expert help with setting this up properly.
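
    From what I've read so far, I suspect what I actually want is masquerading plus an mDNS reflector rather than blanket DNAT - is this roughly right? (untested sketch):

        # let 'home' route the wired side and masquerade it out over wlan0
        sysctl net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

        # and have avahi repeat Bonjour announcements between eth0 and wlan0
        # (in /etc/avahi/avahi-daemon.conf)
        #   [reflector]
        #   enable-reflector=yes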

  • Unable to load Windows after using EasyBCD to Reset bcd [duplicate]

    - by johnny
    This question already has an answer here: How can I repair the Windows 8 EFI Bootloader? (9 answers)

    My Windows installation was working perfectly fine until I clicked "Reset BCD" in EasyBCD in Windows 8. After clicking that, EasyBCD told me to add a Win 8 entry via the Add Entry menu, so I did. After a restart, Win 8 would not start. Neither would recovery (F11). Attempts I made to restore:

        - Ran boot-repair from an Ubuntu live CD several times.
        - Used a Win 8 system recovery disc created via VirtualBox with the Win 8 preview ISO.
        - Ran the automated repair from the Win 8 system recovery disc.
        - Ran the following commands from a cmd started from the Win 8 system recovery disc:
              bootrec /fixmbr
              Result: success message
              bootrec /rebuildbcd
              Result: after hitting (Y), "The requested system device cannot be found"
        - A system refresh started from the Win 8 system recovery disc gives an error that the device is locked.
        - A system reset started from the Win 8 system recovery disc gives an error that a required partition or device is missing or not accessible.
        - Used the automated repair from an EasyRE disc. It gave a success message.
        - Used "Fix boot problem" from a Macrium Reflect WinPE repair disc.
        - Copied the recovery partition to USB. Booting from the USB gave this error:
              Your PC needs to be repaired.
              Error Code: 0XC000000f
              Press Enter to try again
              Press F8 for Startup Settings
          F8 and Enter do nothing.

    I cannot install Win 7 or Win 8 either; the error it gives: "windows cannot be installed on this disk. The selected disk is of the GPT partition style."

  • HTC Diamond Touch sync problem

    - by Anders
    I have an HTC Diamond Touch with all my contacts etc. on it. However, I did not use it for six months while I was abroad. When I started the phone up again, I realized that the touch screen had stopped working. I have tried restarting, soft-resetting, shutting it off, etc., but the touch screen just won't respond. However, I can operate the phone with the buttons, so it's not frozen. Hence I can get into the phone and view contacts, but not use it to call, etc. The problem is, how do I get my 300 contacts out of the thing!? When I plug the phone in, it lets me choose between "Sync with Outlook" and "Use as storage device". It automatically selects "Use as storage device". I cannot choose to sync it using the buttons, and I cannot change this option afterwards either. In short, I have a phone with all of my contact data and am completely unable to get that data out of it. Any tips/help/suggestions? If possible, preferably ones that do not include sending the phone to a hardware workshop for three weeks to get it fixed :)
