Search Results

Search found 20755 results on 831 pages for 'custom map'.


  • Unable to connect to EC2 instance after "reboot"

    - by KPL
    I am not able to connect to my m1.small instance after rebooting it. I have already associated the public IP with this instance. Upon checking the system log, this seems to be the issue:

        cloud-init-nonet[11.84]: waiting 10 seconds for network device
        cloud-init-nonet[21.85]: waiting 120 seconds for network device
        cloud-init-nonet[141.85]: gave up waiting for a network device
        Cloud-init v. 0.7.3 running 'init' at Sun, 18 May 2014 07:02:55 +0000. Up 142.54 seconds.
        ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
        ci-info: |  eth0  | False |     .     |     .     | 02:43:xx:xx:xx:xx |
        ci-info: +--------+-------+-----------+-----------+-------------------+
        ci-info: !!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!

    A bunch of these follow the above message:

        2014-05-18 07:02:56,178 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id ([Errno 101] Network is unreachable)]

    This is obviously related to the network interface not working correctly. I have tried the following so far:

    - Relaunching a new instance from the custom AMI (created from EBS) of the failing instance - the same error shows up in the logs.
    - Attaching a new network interface to the EC2 instance - the error still persists; eth1 shows up in the list, but its "Up" column is False.
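    One thing that may be worth ruling out (an assumption based on the symptoms, not confirmed by the logs): images cloned from a running instance often carry a stale persistent-net udev rule that pins eth0 to the old MAC address, so the NIC on the new instance comes up unnamed and cloud-init never finds a network device. A minimal sketch of the check, run inside the image before re-creating the AMI (paths are the usual Debian/Ubuntu ones):

        # remove any MAC-pinned interface names baked into the image
        rm -f /etc/udev/rules.d/70-persistent-net.rules
        # confirm eth0 is set to DHCP so the metadata service is reachable at boot
        grep -A1 eth0 /etc/network/interfaces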


  • HLS video segmenting complications. How to create a transport stream with ffmpeg

    - by Agzam
    I have h264 videos, and currently we're using Apple's HTTP Live Streaming tools and mediafilesegmenter to segment these files. What I need to do is switch to an alternative segmenter based on this very popular open-sourced segmenter. The problem is that this segmenter does not take just any video; it takes only MPEG-TS videos. So I have to convert my h264 videos to TS first. I can do that with ffmpeg. I'm using this:

        ffmpeg -i encoded.mp4 -vcodec h264 -sameq -acodec aac -strict experimental -f mpegts output.ts

    But this creates a fairly larger output, and the reason is that Apple's segmenter keeps the same video codec (AVC) and the same audio codec (AAC), whereas ffmpeg re-encodes the video to MPEG Video. The question is: can I somehow keep the same AVC video codec and still convert the video to a transport stream? My goal is to keep the same video quality and the same codecs that Apple's mediafilesegmenter does.

    UPD: Okay... it seems that ffmpeg CAN split videos into segments:

        ffmpeg -i encoded.mp4 -c copy -map 0 -vbsf h264_mp4toannexb -f segment -segment_time 10 -segment_list test.m3u8 -segment_format mpegts segment%d.ts

    That still has one problem: it doesn't create an HTTP Live Streaming index file. (-segment_list creates a file with a list of segments, but it doesn't look like an HLS index.) So you still have to create the index file yourself.
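    For the plain conversion step, stream copy should do exactly this - a sketch assuming an ffmpeg new enough to have the h264_mp4toannexb bitstream filter (the same one used in the UPD command), which rewrites the H.264 stream for TS without re-encoding:

        ffmpeg -i encoded.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb -f mpegts output.ts

    Here only the container changes; the AVC video and AAC audio pass through untouched, so quality and size stay the same as the source.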


  • client flips between internal and external IP addresses??

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works.

    Except: one of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This is a problem for how I have some of the clients' SSH access configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong.

    I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it.

    Does anybody out there have any advice? This is driving me crazy...
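    A first diagnostic step (a sketch; the names and addresses below are placeholders for your actual server name, internal DNS address, and client address) is to see what the flipping client actually resolves, and which source address the server sees, at the moment of a flip:

        # on the client: internal named vs. whatever resolver is currently active
        dig +short www.example.com @192.168.0.1
        dig +short www.example.com
        # on the web server: watch live connections from the client
        tcpdump -n host 192.168.0.55

    If the name sometimes resolves to the external address, the traffic hairpins through the router's NAT and arrives stamped with the public IP - which would match the behaviour of a client that falls back from the internal DNS server to the ISP's resolvers.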


  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use at our company.

    All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications: everything gets logged via log4net to either the Event Log on Windows or a text file on Linux, and Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working OK -- that's saved us tons of time versus hunting down individual logging sources.

    What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare) and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, sent to SQL Server (perhaps with full-text enabled), work? I'd hope the cost would be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.)

    Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.


  • VMware virtual machine network devices malfunctioning

    - by sheepz
    I'm running Ubuntu 10.04 LTS and VMware Workstation 7.0.1 build-227600. The virtual machine I'm running in VMware is a custom distribution built on Debian Linux version 3.1. I'm still pretty much a beginner with UNIX administration. After having messed around with the VM (I changed only the name of the folder the .vmx was in, renamed the .vmx and the other .v* files accordingly, and updated the configuration in the .vmx file to match), the network devices on the virtual machine do not work anymore. The virtual machine is used for securely sending messages.

    As far as I know, a perl script called proxy-gen-ifalias is responsible for properly setting up the two virtual network devices eth0 and eth1. The virtual machine comes with a GUI in which I have set up two ethernet network devices, one internal, the other external. Now, after having messed around with this, the UI gives me this error message:

        perl proxy-gen-ifalias eth0 /etc/modprobe.d/alias-eth0 /sbin/update-modules
        perl proxy-gen-ifalias eth1 /etc/modprobe.d/alias-eth1 /sbin/update-modules
        ifdown eth0
        ifdown: interface eth0 not configured
        ifdown eth1
        ifdown: interface eth1 not configured
        perl proxy-gen-netcfg /etc/network/interfaces
        ifup eth0
        SICCSIFADDR: No such device
        eth0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        eth0: ERROR while getting interface flags: No such device
        Failed to bring up eth0.
        ifconfig eth0
        eth0: error fetching interface information: Device not found
        make: *** [/etc/network/interfaces] Error 1

    Here are the contents of the two perl files referred to in the message: paste.pocoo.org/show/2AMzAYhoCRZqlGY7wUFk/ proxy-gen-netcfg
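    One possibility worth ruling out (an assumption prompted by the folder rename, not by the log itself): when a VM's directory is renamed or moved, VMware may treat it as copied and generate new MAC addresses, while a guest this old typically pins eth0/eth1 to the original MACs via its modprobe aliases. A quick comparison, with hypothetical paths:

        # on the host: which MACs does VMware now present to the guest?
        grep -i 'ethernet[01].*address' /path/to/vm/*.vmx
        # in the guest: which interfaces did the kernel actually create?
        dmesg | grep -i eth

    If the MACs changed, either answering "I moved it" instead of "I copied it" on next power-on, or regenerating the guest's interface aliases for the new MACs, would be the direction to investigate.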


  • Looking for a help desk ticketing system..

    - by Dan
    Hi guys, I'm looking for a good help desk ticket solution. It must perform the following actions for it to be useful:

    - It needs to have a single point of contact via email, e.g. [email protected].
    - If we receive a telephone call (or an email outside of the system), we need to be able to create a ticket as if it had been added via the single point of contact; this needs to be done with ease in order to save time.
    - Certain people within our organisation deal with certain customers, so if the email / custom-input support call mentioned in bullet 2 is picked up as having a relationship with a certain person in our organisation, it needs to be sent to them / put in their queue for them to work on.
    - If a person is out of office or sick, any tickets sent to them must be forwarded to somebody else or put into a separate pool of tickets that anybody can access. Perhaps have an agent that sorts through tickets in the pool and assigns them to anybody who is available, preferably the person with the fewest tickets in their queue/list.
    - Once a customer emails and the system logs it, they immediately get a response with a ticket number and maybe details of who is dealing with the problem.
    - Any correspondence in relation to a particular ticket is automatically grouped into a single thread, and not made into a load of separate tickets; i.e. the system scans incoming email subjects for ticket numbers and associates them with existing tickets if that number exists.

    Any help is much appreciated. Thanks.

    P.S. I have taken a look at OTRS but I'm not feeling it, so unless someone can convince me, I guess I'm after an alternative.


  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record to point their website to our server instead of theirs (the whys are trivial: it's what the app does on behalf of your site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly, until we encounter SSL.

    Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible, so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added.

    So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.


  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SharePoint site and want to take a backup so we can roll back in case the install fails. In our System Testing (not production) environment, we've backed up the 12-hive, the virtual directories that IIS points at SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups).

    We have custom event handlers and workflows built with Visual Studio, and we deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows: version 1 and version 2. During the deploy we use stsadm features to install/activate the workflows. We also go to each library and add the new version 2 workflows. This automatically sets the version 1 workflows to "Not Allow" new instances (which is what we want) and the version 2 ones as active - perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 workflows from the GAC, reset IIS (to clear the cache of these version 2 workflow DLLs), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as you read it - no stsadm here.

    All seems to work after our restore - the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the workflow is still trying to use version 2 of the DLL (a System.IO file-not-found error). We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 workflow DLLs are still being referenced even though we restored all the folders and dbs of SharePoint?

    Thanks, Kevin


  • Wireless disconnect randomly with wpa_supplicant reason=2

    - by renenglish
    I installed ubuntu-server 12.04 on my PC, and I use a USB wireless card to join the network. It works OK when I boot up the PC, but the wireless disconnects after a while. If I pkill wpa_supplicant and reload the rtl8192cu driver, it works again - and then disconnects again after a random number of minutes. Here is the syslog:

        May 29 21:49:27 homecenter kernel: [ 6450.459313] wlan1: authenticated
        May 29 21:49:27 homecenter kernel: [ 6450.459535] wlan1: associate with f4:ec:38:45:62:74 (try 1)
        May 29 21:49:27 homecenter kernel: [ 6450.469080] wlan1: RX AssocResp from f4:ec:38:45:62:74 (capab=0x431 status=0 aid=3)
        May 29 21:49:27 homecenter kernel: [ 6450.469085] wlan1: associated
        May 29 21:49:27 homecenter wpa_supplicant[2342]: Associated with f4:ec:38:45:62:74
        May 29 21:49:27 homecenter kernel: [ 6450.481933] ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready
        May 29 21:49:27 homecenter wpa_supplicant[2342]: WPA: Key negotiation completed with f4:ec:38:45:62:74 [PTK=CCMP GTK=CCMP]
        May 29 21:49:27 homecenter wpa_supplicant[2342]: CTRL-EVENT-CONNECTED - Connection to f4:ec:38:45:62:74 completed (auth) [id=0 id_str=]
        May 29 21:49:38 homecenter kernel: [ 6461.472014] wlan1: no IPv6 routers present
        May 29 21:49:38 homecenter ntpdate[2263]: step time server 91.189.94.4 offset 0.012758 sec
        May 29 21:49:51 homecenter ntpdate[2404]: step time server 91.189.94.4 offset -0.001190 sec
        May 29 21:54:38 homecenter kernel: [ 6762.052030] wlan1: deauthenticated from f4:ec:38:45:62:74 (Reason: 2)
        May 29 21:54:38 homecenter wpa_supplicant[2342]: CTRL-EVENT-DISCONNECTED bssid=f4:ec:38:45:62:74 reason=2
        May 29 21:54:38 homecenter kernel: [ 6762.064744] cfg80211: All devices are disconnected, going to restore regulatory settings
        May 29 21:54:38 homecenter kernel: [ 6762.064752] cfg80211: Restoring regulatory settings
        May 29 21:54:38 homecenter kernel: [ 6762.064757] cfg80211: Calling CRDA to update world regulatory domain
        May 29 21:54:38 homecenter kernel: [ 6762.069938] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
        May 29 21:54:38 homecenter kernel: [ 6762.069943] cfg80211: World regulatory domain updated:
        May 29 21:54:38 homecenter kernel: [ 6762.069945] cfg80211:     (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        May 29 21:54:38 homecenter kernel: [ 6762.069949] cfg80211:     (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069952] cfg80211:     (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069956] cfg80211:     (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069959] cfg80211:     (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        May 29 21:54:38 homecenter kernel: [ 6762.069962] cfg80211:     (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
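    Deauthentication reason 2 means "previous authentication no longer valid", and with rtl8192cu this pattern is often attributed to the card's power saving rather than to wpa_supplicant itself. Two mitigations that may be worth trying (a sketch; whether this particular driver honours each knob is an assumption):

        # disable 802.11 power management on the interface
        iwconfig wlan1 power off
        # or disable the driver's power-saving options at module load time
        echo 'options rtl8192cu ips=0 fwlps=0' > /etc/modprobe.d/rtl8192cu.conf

    The modprobe option takes effect the next time the module is loaded, so reload rtl8192cu (or reboot) after writing the file.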


  • How to plug power/reset buttons from case to motherboard leads?

    - by MaxMackie
    I have a motherboard I salvaged from a pre-assembled computer, and now I'm trying to use it in my own custom build. The problem is, this motherboard doesn't have any documentation, because it was never meant to be used by consumers (as far as I know). I need to plug my case's power/reset/hdd-light plugs into the motherboard. I usually check the documentation of the board to see which leads go to what connector, but I have none for this board. So, as I see it, I have two options:

    - I find the documentation (I've emailed Gateway customer service, but I'm unsure of how successful I'll be with that).
    - I simply test the leads one after the other (can this cause damage if plugged into the wrong leads?).

    There might even be a standard for which leads do what, but I'm not sure about this. For reference, my motherboard's SN/MD (?) is: H57M01G1-1.1-8EKS3H. Does anyone have any idea if I can find documentation, or another way to be sure my connections are correct?


  • Exim 4 Virtual Domains and Catchall on Debian (Squeeze)

    - by parazuce
    Hello, I've been at it for about 4 hours now, searching as well as trying different tutorials. Here's my setup: I have 2 domains, both under my own DNS server (MX records set up as well). I have exim4 successfully running, and it is able to send messages from both of those domains. I have tested this using sendmail, manually setting the "From" attribute; Exim successfully delivers mail to users no matter which domain was specified.

    I'm fine with that, but I'm having an issue with virtual domains and adding custom delivery options (such as a catchall). I've been searching for about 4 hours, and I can't find any up-to-date documentation on how to do this. The old method would be to add a line such as:

        domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual

    Once that line was added, I made a directory at /etc/exim4/virtual, then created files inside such as example.com, which would then contain rules for delivery under that domain. This did not work, however. Searching further, I've found that exim no longer supports dsearch (I guess because they claim it never has?). This is where I'm stuck. I'm on a "split" configuration as well.
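    For what it's worth, a minimal sketch of the usual virtual-domain router - this assumes your exim4 binary was actually built with the dsearch lookup (running `exim4 -bV` lists the compiled-in lookups), and on a split config it would go in a file such as /etc/exim4/conf.d/router/350_local_virtual (a hypothetical name):

        virtual_aliases:
          driver = redirect
          domains = dsearch;/etc/exim4/virtual
          data = ${lookup{$local_part}lsearch*{/etc/exim4/virtual/$domain}}

    With lsearch* the lookup falls back to a `*` key when the local part isn't listed, so a line like `*: catchall@example.com` in a domain's file acts as the catchall.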


  • Are HDMI to VGA Adapters Really Device-Specific?

    - by allquixotic
    There are a lot of devices on the market right now (especially mobile devices) with a Micro-HDMI or Mini-HDMI port and no VGA or D-Sub output. Most manufacturers of said devices sell a cable for this (the post originally showed a picture of such an HDMI-to-VGA adapter cable here).

    I have yet to find a cable like this that claims to work on a wide array of devices. In general, these cables claim to work with one specific device only. The way these cables work, I think, is that analog VGA signals are sent from the HDMI port on the device. This should work for devices that have special hardware on the motherboard/GPU capable of driving this.

    Is it the case that these cables have to be custom designed for each device? Or is it rather that any device which possesses this special "signaling of analog VGA over the HDMI port" can be made to work with a cable that is physically compatible (i.e. the HDMI end plugs into the device and the VGA end accepts a VGA monitor cable)?

    Note that I am not looking for a product recommendation, just a conceptual clarification on what exactly these devices are doing. Also, a few remarks:

    - The cables like the one depicted are not digital-to-analog converters. I know about these: they are expensive, and they are the ONLY solution if your device only outputs a digital signal and is incapable of driving analog VGA over the HDMI port.
    - The cables like the one depicted are not straight crossover cables from VGA to HDMI, either. Crossover cables are designed to send a digital HDMI signal over the VGA port's wires; that is, the wire protocol is HDMI (digital) but the physical pinout is the same as VGA, even though nothing analog is happening. Once again, this is not the behavior of the devices this question is about.
    - The cabling and devices that this question is about transmit analog VGA data over the HDMI port (the HDMI port is on the device outputting the data, and the VGA side is the monitor/projector).


  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them.

    The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam - it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database!

    So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have had an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing.

    When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
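    If root is the only objection to --user, one workaround worth sketching (it assumes dspam.conf allows your login to act for other users via a Trust directive, which is what authorises non-root use of --user):

        # assumption: dspam.conf contains the line  Trust brandon
        for file in Mail/Inbox/*; do
            dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
        done
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done

    That trains the same fully-qualified identity the LMTP delivery path uses, so the live classifier and the corpus training finally share one database.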


  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box:

        root@hdd:~# uname -a
        SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris

    The box has 2 network interfaces:

        root@hdd:~# ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2
                inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255
                ether 68:5:ca:9:51:b8
        myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3
                inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255
                ether 0:60:dd:47:87:2
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128

    192.168.12.0 is my normal LAN, with 192.168.12.1 being the firewall/gateway. 10.10.10.0 is a separate LAN for iSCSI (with no internet access). I want to set up IP forwarding so that a computer on 10.10.10.0 will be able to access the internet by using 10.10.10.10 as a gateway (I don't need any port forwarding).

    I have turned on IP forwarding:

        root@hdd:~# routeadm
                      Configuration   Current              Current
                             Option   Configuration        System State
        ---------------------------------------------------------------
                       IPv4 routing   disabled             disabled
                       IPv6 routing   disabled             disabled
                    IPv4 forwarding   enabled              enabled
                    IPv6 forwarding   disabled             disabled

           Routing services   "route:default ripng:default"

        Routing daemons:

                  STATE   FMRI
               disabled   svc:/network/routing/rdisc:default
               disabled   svc:/network/routing/route:default
               disabled   svc:/network/routing/legacy-routing:ipv4
               disabled   svc:/network/routing/legacy-routing:ipv6
               disabled   svc:/network/routing/ripng:default
                 online   svc:/network/routing/ndp:default

    But when I try to start ipnat, I get an error:

        root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf
        ioctl(SIOCGNATS): I/O error

    Here is the config:

        root@hdd:~# cat /etc/ipf/ipnat.conf
        #!/sbin/ipnat -f -
        #
        map e1000g1 10.10.10.10/24 -> 192.168.12.2/32

    So the question is how to fix this. Thanks in advance!
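    A guess worth checking first (an assumption from the error pattern, not verified on Nexenta specifically): SIOCGNATS failing with an I/O error usually means the ipfilter kernel module isn't active at all, rather than a syntax problem in the rule. A sketch of the enabling sequence:

        svcadm enable network/ipfilter
        ipf -E                              # enable the filter if it isn't already
        ipnat -CF -f /etc/ipf/ipnat.conf
        ipnat -l                            # verify the map rule is now loaded

    Separately, the map rule itself may need a second look: NAT is normally applied on the outbound interface, mapping the internal subnet to the external address, i.e. something closer to `map e1000g1 10.10.10.0/24 -> 192.168.12.2/32`.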


  • How do I use a self encrypting drive?

    - by Unique_Key
    I recently purchased a Micron RealSSD C400 self-encrypting drive, and I am having a few issues when trying to get it recognized by my laptop (HP EliteBook 8440p running Windows 7 x64; also tried on a custom-built desktop). When I try to initialize the drive from disk management, I get a CRC error; also, when attempting to partition it from Windows setup, the program can't create the partitions. I also tried with UBCD - nothing. I assume this is due to drive security, but I haven't been able to find much information about this online; do I need management software or something? I'm completely stumped here.

    EDIT: As requested: when I try partitioning the device from Windows setup I get a 0x80300024 error; when I try initializing it from disk management, I get a "Data error (cyclic redundancy check)" message, and the event log shows the following under System:

        Source: VDS Basic Provider, message: unexpected failure. error code 490@01010004 (2x)
        Source: Virtual Disk Service, message: VDS fails to write boot code on a disk during clean operation. Error code: 80070001@02070008 (1x)
        Source: Disk, message: The device \Device\Harddisk2\DR2 has a bad block (2x)

    The security logs show nothing related. Also, when attempting to configure it from UBCD (utility: HDAT2), I get an error along the lines of "can't edit partition information" or something to that tune.


  • LDAP not showing secondary groups

    - by Sandy Dolphinaura
    Currently, I have an LDAP server (running ClearOS, if that makes any difference) containing a database of users. I went and set up LDAP on a couple of my Debian VMs using libpam-ldapd, and I discovered this odd problem: my group/user mapping shows up when running getent group, but the secondary groups do not show up when running id. Here is my /etc/nslcd.conf:

        # /etc/nslcd.conf
        # nslcd configuration file. See nslcd.conf(5)
        # for details.

        # The user and group nslcd should run as.
        uid nslcd
        gid nslcd

        # The location at which the LDAP server(s) should be reachable.
        uri ldaps://10.3.0.1

        # The search base that will be used for all queries.
        base dc=pnet,dc=sandyd,dc=me

        # The LDAP protocol version to use.
        #ldap_version 3

        # The DN to bind with for normal lookups.
        binddn cn=manager,ou=internal,dc=pnet,dc=sandyd,dc=me
        bindpw Me29Dakyoz8Wn2zI

        # The DN used for password modifications by root.
        #rootpwmoddn cn=admin,dc=example,dc=com

        # SSL options
        ssl on
        tls_reqcert never

        # The search scope.
        #scope sub
        #filter group (&(objectClass=group)(gidNumber=*))
        map group uniqueMember member
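    One detail that may explain this (based on the nslcd.conf(5) syntax, not verified against this particular setup): map takes the nslcd attribute name first and the LDAP attribute second, so to resolve membership stored in uniqueMember the line would be the other way around:

        map group member uniqueMember

    id only reports secondary groups when nslcd can match the group's member attribute back to the user, so a reversed map would produce exactly this getent-works-but-id-doesn't behaviour.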


  • How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?

    - by Stephen Jennings
    I originally asked and answered this question: How can I use the Homebrew Python version with Homebrew MacVim? These instructions worked on Snow Leopard using Xcode 4.0.1 and the associated developer tools, but they no longer seem to work on Mountain Lion with Xcode 4.4.1.

    My goal is to leave the system's version of Python completely untouched, and to only install PyPI packages into Homebrew's site-packages directory. I want to use the vim_bridge package in MacVim, so I need to compile MacVim against the Homebrew version of Python. I've edited the MacVim formula to add these to the arguments:

        --enable-pythoninterp=dynamic
        --with-python-config-dir=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config

    Then I install with the command:

        brew install macvim --override-system-vim --custom-icons --with-cscope --with-lua

    However, it still seems to be somehow using Python 2.7.2 from the system. This seems strange to me because it also seems to be using the correct executable:

        :python print(sys.version)
        2.7.2 (default, Jun 20 2012, 16:23:33)
        [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)]
        :python print(sys.executable)
        /usr/local/bin/python

        $ /usr/local/bin/python --version
        Python 2.7.3
        $ /usr/local/bin/python -c "import sys; print(sys.version)"
        2.7.3 (default, Aug 12 2012, 21:17:22)
        [GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.60))]
        $ readlink /usr/local/lib/python2.7/config
        /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config

    I've removed everything in /usr/local and reinstalled Homebrew by running these commands:

        $ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
        $ brew install git mercurial python ruby
        $ brew install macvim
        (nope, still broken)
        $ brew remove macvim
        $ ln -s /usr/local/Cellar/python/..../python2.7/config /usr/local/lib/python2.7/config
        $ brew install macvim
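    Since --enable-pythoninterp=dynamic loads the interpreter library at runtime, it may be worth checking which Python library the Vim binary actually links against (a diagnostic sketch; the app path is an assumption about where Homebrew put MacVim):

        otool -L /Applications/MacVim.app/Contents/MacOS/Vim | grep -i python

    If that points at /System/Library/Frameworks/Python.framework, the build picked up the system framework despite the config-dir flag - which would explain sys.version saying 2.7.2 while sys.executable still shows the Homebrew path, since sys.executable merely reflects how the interpreter was invoked.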


  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken; I got it to work without forcing the installation of any packages and without modifying the packages.

    However, after the upgrade, compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow - it takes a few seconds for each switch. In addition, every effect that compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen: empty screens switch at almost normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl.

    My compiz configuration should be pretty basic. Xorg is likewise configured without anything special; the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig, trying different VSync settings, but none of those helped.

    My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce GTX 200 series. Graphics card performance should naturally be no problem.
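    One driver tweak commonly suggested for exactly this slow-compiz symptom on nvidia blobs of that era (a sketch; harmless to revert if it changes nothing) is to adjust pixmap placement and glyph caching:

        nvidia-settings -a InitialPixmapPlacement=2 -a GlyphCache=1

    The setting does not persist across X restarts, so if it helps it would go into a session autostart script.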


  • Completely automated DVD insert-rip-compress-eject workflow

    - by Kevin L.
    (Partially inspired by this question.) Background: I have a PC hidden away behind an HD LCD in a custom-built entertainment center. The only visible part of the PC is an external DVD drive, mounted above the Wii. The PC happens to have Windows XP on it; Hackintoshing and Linux might be possible, but I've had issues with drivers for the sound card before. Let's just assume that OS X and Linux are a no-go unless they provide a truly awesome and simple solution for this particular problem.

    Goal: I would like to have a completely automated workflow for ripping DVDs, something like this:

    1. Push the eject button on the DVD drive, insert the DVD.
    2. PC recognizes that this is a video DVD (as opposed to data).
    3. PC rips the DVD to the hard drive.
    4. PC finishes ripping, and ejects the DVD tray.
    5. PC compresses the DVD image into some format that an Xbox 360 can read.
    6. PC copies the finished compressed video file to a particular folder, so that it can be read into a WMP11 library and seamlessly played by the Xbox 360.
    7. PC cleans up all temporary files. Done.

    The impetus for having this be completely automated is that I'll never need to switch the TV to the PC's input and fiddle with the wireless keyboard; that's just needless user intervention. The UI doesn't have to be pretty, and I don't care about speed. I can probably bridge several of the gaps with some creative Perl use, but it seems likely that many (or all) of the parts should already exist. Any thoughts?


  • Joining Samba to Active Directory with local user authentication

    - by Ansel Pol
    I apologise that this is somewhat incoherent, but hopefully someone will be able to make enough sense of it to understand what I'm trying to achieve and provide pointers.

    I have a machine with two network interfaces connected to two different networks (one of which it's providing several other services for, such as DNS), running two separate instances of Samba, one bound to each interface. One of the instances is just a workgroup-style setup using share-level authentication, which is all working fine. The problem is that I'm looking to join the other instance to an MS Active Directory domain (provided by MS Windows Small Business Server 2003) to enable a subset of the domain users to access the shares from Windows machines on the other network.

    The users who need access from the domain environment have accounts (whose names are all-lowercase versions of their domain usernames) on the machine running Samba, but I'm not sure how to map the UIDs, and everything I've read concerns authenticating accounts on that machine against either AD or another LDAP server. To clarify: I only want the credentials for AD users accessing the non-workgroup Samba instance to be authenticated against AD, not the accounts on the machine running Samba. I hope this is sufficiently clear.

    EDIT: In addition to being able to access the Samba shares from AD, I do also need to be able to access a share on the domain from the machine running Samba, but would still like everything non-Samba-related to authenticate locally.
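    A minimal sketch of the direction this usually takes (all names here are assumptions, and `security = ADS` presumes working Kerberos on the Samba host plus a `net ads join`):

        # smb.conf for the AD-facing instance only
        [global]
            security = ADS
            realm = EXAMPLE.LOCAL
            workgroup = EXAMPLE
            # map DOMAIN\UserName onto the existing lowercase local accounts
            username map = /etc/samba/username.map

    with /etc/samba/username.map containing lines of the form `ansel = EXAMPLE\Ansel`. Authentication is then performed against AD, while file ownership and permissions continue to use the matching local account's UID - which appears to be the split being asked for.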


  • Apache ProxyPassReverse and https

    - by joshuaball
    Hi, I would like to map all traffic on 80 and 443 from foo.com to an internal server: 192.168.1.101. I have a VirtualHost (Apache 2.2 on Ubuntu) set up as follows (note, I had to break up the hyperlinks below because I am a 'new user'):

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>

    And that works great for http traffic. However, I can't seem to do the same thing for https. I have tried:

    - Changing VirtualHost *:80 to * - but that doesn't work (I need http-to-http and https-to-https).
    - Creating a new VirtualHost entry for *:443 that redirects to http://192.168.1.101/, but that fails as well (browser timeouts).

    I did some searching, here and elsewhere, and the closest question I could find was this, but that didn't quite answer it. Also, just out of curiosity, I tried mapping all ports to https (by changing the two ProxyPass lines from http to https and removing the :80 from the VirtualHost), and that didn't work either. How would you do that as well? Any thoughts? Thanks in advance.
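    For the https side, the proxy normally has to terminate SSL itself; a sketch of the *:443 counterpart (the certificate paths are assumptions, and mod_ssl must be enabled with a `Listen 443` in effect):

        <VirtualHost *:443>
            ServerName foo.com
            SSLEngine On
            SSLCertificateFile    /etc/ssl/certs/foo.com.crt
            SSLCertificateKeyFile /etc/ssl/private/foo.com.key
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.101/
            ProxyPassReverse / http://192.168.1.101/
        </VirtualHost>

    If the backend must instead be reached over https, `SSLProxyEngine On` together with `ProxyPass / https://192.168.1.101/` is the variant to try. A bare *:443 vhost without SSLEngine would explain the browser timeouts described above, since nothing would complete the TLS handshake.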


  • Windows 7 deployment thru WDS

    - by vn
    Hello, I am deploying new systems on my network, and I built my reference computer by installing the OS the manufacturers (Dell, and a custom-built system from a local business) shipped with all drivers, then installing all the desired applications. As for the settings part, I'm doing most of it through GPOs. I want to image my reference computer and deploy it with WDS. I found several links on how to sysprep, but they all do it slightly differently without explaining the differences. My questions:

    1. How do I manage (in sysprep) the domain join/computer naming part, since (from what I understand) WDS manages that?
    2. How do I know/determine what I need to set up in my sysprep.xml?
    3. Can you sysprep a first time, try it, and if it fails, make some modifications and try again? I am thinking of doing a basic sysprep, checking what info can be automated, and correcting that in the answer file.
    4. What do I miss if I skip the "audit" mode? I don't plan on re-doing the reference computer...
    5. I read that sysprep resets settings from the reference computer, like the computer name, activation/key and such... what settings does sysprep reset by default that I should be aware of?

    I must admit I am quite lost about Windows 7, sysprep, RIS, the MDT toolkit, WDS... I understand the way of doing it with XP, but it changed so much with Windows 7! The links I am reading are:

        http://far2paranoid.wordpress.com/2007/12/05/prep-for-sysprep/
        http://blog.brianleejackson.com/sysprep-a-windows-7-machine-%E2%80%93-start-to-finish-v2
        http://www.ehow.com/print/how_5392616_sysprep-machine-start-finish-v2.html

    Thank you VERY much for any answers; they are much appreciated.
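    On question 1, the naming part is usually left to the answer file rather than fought over with WDS (a fragment only, not a complete unattend file; the asterisk asks Windows to generate a name, which WDS client-naming policy can then override):

        <!-- specialize pass, Microsoft-Windows-Shell-Setup component -->
        <ComputerName>*</ComputerName>
        <!-- the domain join itself is normally handled by the separate
             Microsoft-Windows-UnattendedJoin component -->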


  • Server Names Inside Private Network

    - by thyandrecardoso
    Our office has a private network, where any requests on a (pre-determined) public IP are forwarded to a private IP inside said network. On that private IP, we've got a server running several services, including HTTP servers and SCM systems. We only control our private network, and have no control over the public IP configuration. We bought a domain name and pointed it at that public IP, so people can access our services from the outside.

    But when inside the office, people can't use that DNS name, because the server and every other host inside the network share the same public IP! For desktops inside the office network, dealing with names is really easy: one entry in the hosts file and we're done. However, for laptops that keep going in and out and need to access services inside the office, the naming is really annoying.

    I don't know the "standard" process for dealing with this kind of situation. I've considered installing BIND in the office and making people configure their wireless and wired connections to use that DNS server. What is the correct approach here? If using BIND (or any other DNS server) is the answer, how should I configure it so that people inside the office can use it to get our custom names, and get forwarded to the ISP DNS when trying to reach the internet?
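    If BIND is the route taken, the core of a split-horizon setup is small (a sketch with assumed names and addresses: the public domain is example.com, the server's LAN address is 192.168.0.10, and 203.0.113.53 stands in for the ISP resolver):

        // named.conf on the office DNS server
        options {
            forwarders { 203.0.113.53; };
        };
        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com.internal";
        };

    where db.example.com.internal holds A records pointing at 192.168.0.10. Clients inside the office then resolve the public name to the private address, every other lookup is forwarded to the ISP, and handing this server out via DHCP means laptops pick it up automatically when they come in.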


  • Linux udev persistent net rule

    - by Anonymous
    I have a Linux system (Slackware Linux 13.0) with two network interfaces; let's call them NIC0 and NIC1. My goal is to make NIC0 appear as eth0 in the system. I know this can be achieved via udev rules that map network aliases to the MAC addresses of network interfaces; in Slackware Linux, the file /etc/udev/rules.d/70-persistent-net.rules contains such rules.

    The trickiest part of my problem is that I need to fake the MAC address of NIC0. I know I can dynamically change the MAC address of a network interface with the command:

        ifconfig eth0 hw ether <new MAC address>

    Do you see the problem? This supposes that the network interfaces are already set up. So my question is: if I had a udev rule for NIC1 (the one that shall come up as eth1, with its original MAC address), would it be enough for the system to bring the other network interface (NIC0) up as eth0 by default? This way I could change its MAC address later, after the udev machinery completes and the network aliases are brought up.
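    The unambiguous way is a rule for each card, matching the real (pre-spoof) MAC addresses - a sketch with placeholder MACs:

        # /etc/udev/rules.d/70-persistent-net.rules
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:00", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth1"

    udev matches at device-add time, before any `ifconfig ... hw ether` runs, so NIC0 can still be renamed to eth0 by its original MAC and have the address faked afterwards. Relying on a rule for NIC1 alone leaves NIC0's name to kernel probe order, which is not guaranteed.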


  • Numeric UIDs/GIDs in ACLs on OS X server (10.6)

    - by Oliver Humpage
    Hi, on one (old OS X 10.4) server I'm tarring up some files which have ACLs. I'm then using "tar -xp" to untar the archive onto a new 10.6 server, which doesn't have any users/groups configured on it yet except the default admin (UID 501) (there's a reason for that, don't ask!). Obviously this means an "ls -lne" will list files and ACLs with numeric UIDs and GIDs.

    Now, for the normal file permissions it makes sense: you get UIDs like "1037". And for some ACLs, it also makes sense: you get things like "AAAABBBB-CCCC-DDDD-EEEE-FFFF00000402" for groups (0x402 = GID 1026) and "FFFFEEEE-DDDD-CCCC-BBBB-AAAA000001F5" for users (0x1F5 = UID 501). However, some ACLs have UIDs like "E51DA674-AE70-41BC-8340-9B06C243A262" or GIDs like "0A3FCD24-0012-46FA-B085-88519E55EF29", and I have absolutely no idea how to translate these IDs back into something that can be matched to the original IDs (UID 1072 and GID 1047 respectively, in this example). Can anyone help me translate these weird long hex strings?

    (Basically we're moving from local users to an Active Directory setup, so I want to move all files to the new server with permissions intact, then chmod, chgrp and set ACLs such that we translate old IDs to the new AD IDs. Hence needing some way to map between the sets. I don't believe there's an easier way to do this?)

    Many thanks, Oliver.
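    Those long strings are Directory Services GeneratedUID values, whereas the AAAABBBB.../FFFFEEEE... forms are the synthesized GUIDs OS X falls back to for bare numeric IDs. A sketch of how the mapping can be recovered (dsmemberutil exists on 10.5 and later, so the first two commands are for the new box; the dscl query is for building a translation table from the accounts the old server still knows):

        # GUID -> numeric id, for an unknown ACL entry (10.5+)
        dsmemberutil getid -X E51DA674-AE70-41BC-8340-9B06C243A262
        # numeric id -> GUID
        dsmemberutil getuuid -u 501
        # on the old server: read an account's GeneratedUID ('oliver' is a placeholder)
        dscl . -read /Users/oliver GeneratedUID

    A GUID that no machine can resolve means no account with that GeneratedUID exists there, which is why the new server has nothing to shorten it to.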

