Search Results

Search found 20890 results on 836 pages for 'self reference'.

Page 647/836 | < Previous Page | 643 644 645 646 647 648 649 650 651 652 653 654  | Next Page >

  • Laptop shuts down randomly without warning

    - by Robert P.
    My Asus Zenbook UX32V turns off randomly while I'm working on it. This happens both shortly after the computer is turned on (5 minutes) and after it has been on for several days. Some observations:

    - I'm not running any heavy software.
    - The laptop is not hot, and the fan is not running at maximum capacity.
    - It happens while the laptop is lying still on the table.
    - There is no warning; the screen simply goes black.
    - It happens both on charger and on battery.

    My guess is that it suddenly loses power somehow. What puzzles me is that I can flip the laptop upside down, turn it sideways, shake it, etc. without it shutting off. This makes me think it's not something loose causing occasional short circuits. I realize the laptop probably doesn't like being flipped and shaken, but it was the best way I could troubleshoot. I rarely turn the computer off; I only put it in hibernate or sleep mode (most often hibernate), and I've never found the laptop powered off when waking it from sleep. I've had the problem for a few months and it happens 2-8 times a week. Specs: Asus Zenbook UX32V, Windows 8.1 (it happened in Windows 8.0 too), Intel i5-3317U CPU @ 1.70GHz. The laptop is approx 1.5 years old, but it has a small dent on one of the sides that probably voids the warranty. The dent has been there since week one and I don't think it's related to the problems I'm having now. Does anyone have a clue what might cause this, and how it might be fixed? I've read all the other questions that seem related to my issue (some listed below), but none report the same behavior as mine; most involve heavy games, overheating, etc.

    - Asus N53J Laptop randomly shuts down
    - Laptop is randomly shutting off
    - Computer shuts down without warning
    - My laptop acer aspire 5720 suddenly turn off randomly
    - Computer randomly shutting down
    - Windows 8.1 randomly shuts self down
    - ASUS K55VM Laptop unexpectedly shuts down

    Read the article

  • Debian, How to convert filesystem from ISO-8859-1 into UTF-8?

    - by Johan
    I have an old PC running Debian stable that is in need of an upgrade. The problem is that it uses latin1 (ISO-8859-1) for everything, and since the rest of the world has moved to UTF-8, I plan to convert this computer as well. For this question I will focus on the files served with Samba, some of which have latin1 characters in their filenames (like åäö). My plan is to move all the data off this old computer onto a brand new one that is also running Debian stable (but with UTF-8). Does anybody have a good idea? Thanks, Johan

    Note: later I plan to use iconv to convert the content of some files, with something like this:

        iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt

    However, I don't know of a good way to convert the filesystem itself.

    Note: normally I just scp from one computer to the next, but then I end up with latin1 characters in the UTF-8 filesystem...

    Update: I did a small test round with a handful of files (with funny chars) in the filenames, and it looked like it could work:

        convmv -r -f ISO-8859-1 -t UTF-8 *

    So it was only a matter of executing it with --notest:

        convmv -r -f ISO-8859-1 -t UTF-8 --notest *

    Nothing more to it.

    Read the article

  • How can one make vim change terminal colors?

    - by amn
    I am using command-line vim running in an xterm (which runs sh). I have color in vim according to a color scheme I like. The problem is, as usual with 256-color terminals and truecolor color schemes, the colors are wrong. Now, I know I can do a gazillion things to fix this, including installing gvim, but I like my terminal. In fact, using xrdb [-merge] on an .Xresources file, I now actually have xterm override the color values, and the theme looks perfect. Since I may be switching to another theme, I need some workflow to have vim do what xrdb does - reset the terminal color palette. Right now I have to reset the color values with xrdb first, then launch another xterm to actually pick up those values, then launch vim from that newly opened xterm to get the exact colors. The way I understand it, a vim color scheme, just like any other terminal application, uses colors by referencing their ids, while the X resources set the values themselves. I think I saw somewhere on the Internet that terminal control sequences can reset the actual color values - in fact, I am sure they can, since I managed to set my terminal background color at runtime. How would I make vim emit these sequences to match the values for the color scheme? And is there any reference to these control sequences, as part of any standard?
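    For what it's worth, xterm's palette can be changed at runtime with the OSC 4 control sequence (documented in xterm's ctlseqs specification); a minimal sketch from the shell, with the color index and RGB value as placeholders:

        # set palette entry 1 to #dc322f: OSC 4 ; index ; rgb-spec, ended by BEL
        printf '\033]4;1;rgb:dc/32/2f\007'

    A theme's whole palette could in principle be replayed as a series of such sequences before vim starts, e.g. from a small wrapper script.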

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first outline the requirements. We have one storage unit and 2 servers at present. We receive data files from remote servers onto our server, and both servers run our application against that data to produce the final result. In 3-4 months we may add more servers to the current cluster pool to handle more data load from the remote locations. So my requirement is to mount the same storage partition on 2-3 servers (it might be 4-5 more servers in the future); my application reads data from the storage partition and writes back to it. Is there any bottleneck or limitation in RHCS, GFS2, or anything similar? We are new to RHCS + GFS and all of this. Is there any better or more lightweight approach to our requirement? What is the best OS version for this - how is RHEL 6.4 64-bit? Please share case studies or reference guides from past experience with such environments. Regards, Ben

    Read the article

  • zsh auto-complete event designator

    - by simont
    (See my previous question for additional context.) I'm migrating to zsh from bash, and using oh-my-zsh. When my zsh history looks something like the following:

        git status
        git add -A
        git commit

    I want to be able to re-run git add -A. To do that, I could use !?git add, which should:

        !?str[?]
        Refer to the most recent command containing str. The trailing '?' is
        necessary if this reference is to be followed by a modifier or
        followed by any text that is not to be considered part of str.

    The zsh event designators are documented here. Unfortunately, I can't do this: as I type !?git add, the moment I hit the space, it expands the designator to the most recent command matching git (i.e. it completes to git commit). I can't use the event designator properly because of this expansion on space. I assume this is an oh-my-zsh feature. I have no idea where to look, though - grepping for 'complet' in the oh-my-zsh source doesn't get me anywhere. My question: how do I turn off this feature? Or, if that's not known, where should I be looking - if I were going to implement this expand-on-whitespace behavior, where would be a logical place to do so in the oh-my-zsh framework?
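    The expansion-on-space behavior matches zsh's magic-space widget, which oh-my-zsh binds to the space key by default; a minimal sketch of a likely fix for ~/.zshrc (an assumption that this binding is indeed the culprit):

        # restore a plain space; designators like !?str then expand only
        # when the line is submitted, not as you type
        bindkey ' ' self-insert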

    Read the article

  • Options for small windows network setup without dedicated server?

    - by Mitch
    I'm very weak on networking and hope someone can point me in the right direction. I have written some Windows client/server software that incorporates a database located on a Windows server. I have a test installation running at a customer's office where the server has a static IP address; in this case it's easy for the clients to access the database because of the fixed IP address. Also, customers with network servers generally have specialist support staff to set up my software, so it's not such a problem for me. However, I also need to offer the software to customers who have small offices with fewer than 10 PCs and no dedicated network server. In this case I want the customer to be able to nominate one PC as the database "server", install my software on it, and have the clients access it. But in this situation I believe the "server" PC may not have a fixed IP address. Q1: What is the best way to set this up simply and make it work? Can I reliably reference the "server" by name, or is there a way to assign fixed IP addresses? Ideally this needs to work on small networks running a mixture of XP/Vista/Windows 7, as my target market may well have mixed OSes. I guess this would be akin to home networking? Many thanks, Mitch
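    One low-tech option is to give the nominated "server" PC a static address outside the router's DHCP pool; a hedged sketch (the adapter name, addresses, and gateway are placeholders for whatever the office actually uses, and on Vista/7 the equivalent context is "netsh interface ipv4"):

        rem run in an administrator prompt on the "server" PC
        netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1 1

    On a single-subnet office network, NetBIOS name resolution usually also lets clients reach the machine by its computer name, so a connection string pointing at \\OFFICESERVER is an alternative to a hard-coded address.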

    Read the article

  • How can I make XAnalogTV fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason the XAnalogTV screensaver leaves two "blank" strips at the edges of my screen. If I launch XAnalogTV manually, it displays a window that it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This also happens when the screensaver goes fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen: the edges of the screen are simply "ignored", with nothing drawn there. Specifically, note the transition between the maximized and fullscreen screenshots (the undrawn whitespace shrinks as the vertical height increases). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true. Is there any way to force XAnalogTV to fill my entire screen? Relevant screenshots (windowed, maximized, and fullscreen, respectively):

    Read the article

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using, by simply not joining the machines back into the domain. Last summer, before the annual reinstall that ships machines to the summer school, I toyed with the idea of installing Windows 7 over the network instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected Windows to be more friendly to PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX-based system as well. I've had some success preparing an ISO with the Windows Deployment Kit and loading the ISO into memory. This was needed because I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customizing Windows setup in that timeframe, nor could I get Samba to work properly; studying it all ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it out. WDK didn't make things easier by mounting the disk image into RAM and writing it back in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following: What exact approach do you use for netbooting Windows 7 setup? How can Windows 7 setup best be customized to be completely unattended - including installation to a specific system partition without destroying the data partition, creation of a passworded admin and a default user, a MAC-address-based hostname, and joining a domain? As much detail as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
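    For the domain-autojoin part specifically, unattended setup reads it from the Microsoft-Windows-UnattendedJoin component of unattend.xml during the specialize pass; a minimal sketch, with the domain name and join account as placeholders:

        <component name="Microsoft-Windows-UnattendedJoin"
                   processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <Identification>
            <JoinDomain>EXAMPLE</JoinDomain>
            <Credentials>
              <Domain>EXAMPLE</Domain>
              <Username>join-account</Username>
              <Password>join-password</Password>
            </Credentials>
          </Identification>
        </component>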

    Read the article

  • need advice on data center move, communication with both facilities during transition

    - by Brian Roden
    We are beginning the process of moving to a new facility. Office and warehouse operations will both be moving, and we must get shipping operations up and running at the new location while continuing to ship from the old one. Our contract with some third-party warehouse tenants requires two-business-day turnaround (only weekends and holidays excluded), so we can't have major downtime during the move. We would like to keep our 172.16.60/61.xxx internal address space in use throughout the move. Is it possible to keep using this same internal range, and have our existing WatchGuard Firebox 520 and whatever router we get for the other location (preferably the same model) treat both locations as one network, leaving our host IPs the same throughout the move? Renumbering the servers when they move isn't a big deal, but our wireless order-picking terminals in the warehouse have fixed IPs (and a fixed-IP, non-DNS reference to the host they speak with) and would be a massive undertaking to reconfigure when the servers move: each device would have to be reconfigured at least twice - some when we start using them in the new building while the host is still here, all of them in both locations when the host moves to the new building, and the rest when they finally make the move to the new building. We're trying to avoid that if possible.

    Read the article

  • Sharepoint (WSS 3.0) on SBS 2008 broken.

    - by tcv
    I recently ran the SharePoint Products and Technologies Wizard. I had hoped this would bring up SharePoint and allow me to access it so I could begin to learn. But it's not working. Here is some data that I hope is relevant:

    - I am doing all my testing on the SBS 2008 server itself.
    - I changed the host header in IIS to reflect an external FQDN I plan to deploy.
    - The SBS server is remote and there are no domain-connected workstations.
    - If I browse "localhost" over SSL, I can get to the site, albeit with a self-signed certificate warning.
    - If I attempt to connect via SSL using either the internal FQDN (.local), the external FQDN (.net), or any other permutation thereof, I am prompted for credentials three times but am not allowed access. My account is a domain admin.
    - The site is inaccessible over port 80 whether using localhost, the internal FQDN (.local), or the external FQDN (.net).

    Right now, I suspect my problem is within IIS, but I don't know. My plan is to publish the SharePoint site to the web so my partner and I can check documents in/out. Can someone help me get started in the current direction?
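    The three credential prompts when browsing a local site by its FQDN are a classic symptom of the Windows loopback check (KB896861); a hedged sketch of the commonly cited workaround for test servers, assuming that is the cause here:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1

    The more surgical alternative is to list the specific host names under BackConnectionHostNames in HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 rather than disabling the check entirely.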

    Read the article

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable by clients over the network. In addition, some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of rars and zips that need to be searched inside.
    - Permissions support, allowing user-based control to reflect the current permission access in the Samba shares. The userbase will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows-based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?)

    edit: similar question here

    What else is out there? Are you in a similar situation? What do you use?

    Read the article

  • virtualbox instances dedicated-server with custom dnsmasq

    - by ovanes
    I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef, I may end up with many different ones. I thought it would be a great idea to deploy dnsmasq on the server to dynamically assign IP addresses to the VMs. Since each Vagrant/Chef recipe is configured to set the VM's host name, I can find/reference the appropriate VM by its host name. Finally, the entire infrastructure is not directly accessible via the internet, so the dedicated server is the OpenVPN host. The entire infrastructure may be seen as:

        +-------------------------------------+
        | Dedicated Server                    |
        |                                     |
        |  +-------------+   +------------+   |      +------------------+
        |  |   DNSMasq   |   |  OpenVPN   |<==========>|     Client     |
        |  +-------------+   +------------+   |      +------------------+
        |         ^                ^          |
        |         |                |          |
        |         |             +--+          |
        |         |             |             |
        |    +-------+          |             |
        |    |  VM1  |          |             |
        |    +-------+          |             |
        |       ...             |             |
        |    +-------+          |             |
        |    |  VM2  |----------+             |
        |    +-------+                        |
        +-------------------------------------+

    Now some questions I am struggling with:

    1. Are there any other suggestions for accessing a private infrastructure like this? I don't want to reinvent the wheel.
    2. On the dedicated server I don't see the vboxnet0 interface, but VirtualBox is installed without the GUI. Accessing the virtual boxes via ssh works fine. Did I miss something?
    3. dnsmasq must serve the local VMs only; otherwise there is a chance that the local dnsmasq starts serving other servers on the network, which I don't want. Because I don't see vboxnet0, I tend to use the no-dhcp-interface=eth0 config option. Are there any thoughts on that, despite the fact that a second NIC (which is not the case here) might start serving DHCP requests?
    4. How should I configure the VM's network interface so that I can access it via OpenVPN and resolve its hostname using dnsmasq? I think it should be the host-only network card. Should I do bridging in the OpenVPN config, or is routing sufficient?
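    A minimal dnsmasq sketch of the "serve the VMs only" idea (the interface name and address range are assumptions; vboxnet0 only appears once a host-only network exists, e.g. after VBoxManage hostonlyif create):

        # /etc/dnsmasq.conf - bind only to the host-only interface
        interface=vboxnet0
        bind-interfaces
        no-dhcp-interface=eth0
        # hand out leases on VirtualBox's default host-only subnet
        dhcp-range=192.168.56.100,192.168.56.200,12h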

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment running exclusively Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried. The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. I decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum to a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
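    On Server 2008 a plausible culprit is UAC's split token: a drive created in one token's context isn't visible in the other. A hedged sketch of the registry value commonly used to link the two (it is documented for mapped drives; whether it also covers subst is an assumption, and it requires a reboot):

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1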

    Read the article

  • Upgrading MySQL Connector/Net

    - by Todd Grover
    I am trying to publish a website with our hosting provider. I am getting errors due to the fact that they only allow medium trust, and the MySQL Connector/Net that I am using requires reflection to work. Unfortunately, reflection is not allowed in medium trust. After some research I found that the newest version of MySQL Connector/Net may solve this problem: Connector/Net 6.6 includes enhancements to partial-trust support to allow hosting services to deploy applications without installing the Connector/Net library in the GAC. I am thinking that will solve my problem. So I uninstalled MySQL Connector/Net 6.4.4 and installed MySQL Connector/Net 6.6.4. When I run the application in Visual Studio 2010 I get the error:

        ProviderIncompatibleException was unhandled by user code

    The message is "An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct." The InnerException is "The provider did not return a ProviderManifestToken string." Everything works fine when I have Connector/Net 6.4.4 installed: I can access the database and perform read/write/delete actions against it. I have references to the following in the project:

        MySql.Data
        MySql.Data.Entity
        MySql.Web

    My connection string in Web.config:

        <connectionStrings>
          <add name="AESSmartEntities"
               connectionString="server=ec2-xxx-xx-xxx-xx.compute-1.amazonaws.com; user=root; database=nunya; port=3306; password=xxxxxxx;"
               providerName="MySql.Data.MySqlClient" />
        </connectionStrings>

    What might I be doing wrong? Do I need any additional setting(s) to work with version 6.6.4 that weren't required in the older version 6.4.4?
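    One thing worth checking after swapping Connector/Net versions is the ADO.NET provider registration: if the factory entry still points at the old assembly version, Entity Framework can fail before it ever obtains a manifest token. A hedged Web.config sketch (version and public key token as shipped with Connector/Net 6.6.4; verify against the installed assembly):

        <system.data>
          <DbProviderFactories>
            <remove invariant="MySql.Data.MySqlClient" />
            <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient"
                 description=".Net Framework Data Provider for MySQL"
                 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
          </DbProviderFactories>
        </system.data>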

    Read the article

  • Dovecot authentication not working

    - by user1488723
    I run an Ubuntu 10.04 VPS with Postfix and Dovecot installed. For a while I had problems with the mail server itself (Postfix), but now it runs OK: I can telnet into it from localhost (telnet localhost 25 while logged in), and I'm blocked if I try to do it from the outside (telnet mail.example.org 25), which is as it should be according to my main.cf. However, when I connect to Dovecot (openssl s_client -connect mail.example.com:993) I'm allowed in but denied when trying to identify myself as a user. Excerpt from the Dovecot login:

        Key-Arg   : None
        Start Time: 1341074622
        Timeout   : 300 (sec)
        Verify return code: 18 (self signed certificate)
        OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.

    When I continue and try to log in as a specific user with the command:

        A001 login user password

    I get:

        A001 NO [AUTHENTICATIONFAILED] Authentication failed.

    I've reset the password to ensure it is correct, and I know the user (user) exists on the system. When I do /etc/init.d/dovecot reload I get:

        /etc/init.d/dovecot: 29: maildir:~/Maildir: not found
         * Reloading IMAP/POP3 mail server dovecot    [ OK ]

    Could it be that the mailboxes aren't found? Postfix main.cf:

        home_mailbox = Maildir/
        mailbox_command =
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_loglevel = 1
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_sasl_auth_enable = yes
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_local_domain = $mydomain

    Dovecot.conf:

        protocols = imap imaps
        disable_plaintext_auth = no
        log_timestamp = "%b %d %H:%M:%S "
        ssl = yes
        ssl_cert_file = /etc/postfix/ssl/smtpd.crt
        ssl_key_file = /etc/postfix/ssl/smtpd.key
        mail_location = maildir:~/Maildir
        auth_verbose = yes
        mail_access_groups = mail
        auth_username_chars = abcdefghijklmnopqrstuvwxyz0123456789
        protocol imap {
          imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
        }
        auth default {
          mechanisms = plain login
          passdb pam {
          }
          userdb passwd {
          }
          socket listen {
            client {
              path = /var/spool/postfix/private/auth
              user = postfix
              group = postfix
              mode = 0660
            }
          }
        }

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 tell me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]

    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is in the SCTP section of netstat -a:

        SCTP:
                Local Address        Remote Address       Swind  Send-Q Rwind  Recv-Q StrsI/O  State
        ------------------------------- ------------------------------- ------ ------ ------ ------ ------- -----------
        0.0.0.0                         0.0.0.0                         0      0      102400 0      32/32   CLOSED

    But I'm not really sure what that means. It doesn't seem like I can disable SCTP. There are some tunable SCTP parameters, but they're not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few Solaris zones running on this server, each with its own IP address on a virtual interface (eth0:0, eth0:1, etc.). Does anyone have any idea what might be causing this, and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?

    Update: This was happening more frequently, but now it seems to happen roughly every hour or every two hours. It's not consistent. I tried setting the link speed and duplex to match the switch port, and that seemed to make it stop for a few hours, but then it started again.

    Read the article

  • Permission problem with Git (over SSH) on FreeBSD

    - by vpetersson
    We're having a permission problem with Git on FreeBSD. The setup is fairly straightforward: we have a few different repos on the same server; for simplicity, let's say they reside in /git/repo1 and /git/repo2. Each repo is owned by the user 'git' and a self-titled group (e.g. repo1). The repo is configured with g+rwX access, and every user who commits to the repository is also a member of the repo's group (e.g. repo1). The Git repositories all have 'sharedRepository = group' set. So far so good: all users can check out the code from the repositories, and the first user can commit without any problem. However, when the next user tries to commit to a repository, he receives a permission error. We've been banging our heads against this issue for some time now, and the only way we've managed to resolve it is by running the following script between commits (which is obviously very inconvenient):

        find /git/repo1 -type d -exec chmod g+s {} \;
        chmod -R g+rwX /git/repo1
        chown -R git:repo1 /git/repo1/
        cd /git/repo1
        git gc

    Anyone got a clue to where the problem lies?
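    A hedged reading of the symptom: core.sharedRepository loosens the mode of files Git creates but does not change their group, so the setgid bit on every directory has to be in place before the second user's first commit, and each committer's umask must not strip group write. A one-time sketch under those assumptions (bare-repo paths as above):

        # setgid makes new object directories inherit the repo group
        find /git/repo1 -type d -exec chmod g+s {} +
        # make sure the shared setting is present in the repo config
        git --git-dir=/git/repo1 config core.sharedRepository group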

    Read the article

  • Installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the rpmforge repo and the dag repo. Here are some guides I'm using for reference: TWoZaO and Razuna. The rpmforge repo has ffmpeg, but if you try to install it, it complains about missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from svn so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in rpmforge or dag; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no go:

        error: Failed dependencies:
          SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
          alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
          libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
          libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64

    Read the article

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. First, I had the following error during ./configure:

        checking whether <wchar.h> uses 'inline' correctly... no
        configure: error: <wchar.h> cannot be used with this compiler
        (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT).

    This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted the configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message:

        Making all in src
        gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        gmake  all-am
        gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src'
          CCLD   chroot
        Undefined                       first referenced
         symbol                             in file
        eaccess                             ../lib/libcoreutils.a(euidaccess.o)
        ld: fatal: Symbol referencing errors. No output written to chroot

    What could cause this problem? PS: I was only compiling coreutils because I wanted colour ls.

    Read the article

  • VPN Setup: Mac OS X and SonicWall

    - by noloader
    I'm trying to get VPN access up and running. The company has a SonicWall firewall/concentrator and I'm working on a Mac. I'm not sure of the SonicWall's hardware or software level. My MacBook Pro is OS X 10.8, x64, fully patched. The Mac networking applet claims the remote server is not responding, and the connection attempt subsequently fails. This is utter bullshit, as a Wireshark trace shows the Protected Mode negotiation and then the fallback to Quick Mode. I have two questions: (1) does Mac OS X VPN work in real life? (2) Are there any trustworthy (non-Apple) tools to test and diagnose the connection problem (Wireshark is a cannon and I have to interpret the results)? And a third question (off topic): what is broken in Cupertino such that so much broken software gets past their QA department?

    EDIT (12/14/2012): The network guy sent me a "VPN Configuration Guide" (Equinox document SonicOS_Standard-6-EN). It seems an IPSec VPN now requires a Firewall Unique Identifier. Just to be sure, I revisited RFC 2409, where Main Mode, Aggressive Mode, and Quick Mode are discussed. I cannot find a reference to Firewall Unique Identifier. I think I am screwed here: I am trying to connect to a broken (non-standard) firewall with a broken Mac OS X client. Fortunately, I can purchase VPN Tracker Personal (a {SonicWall|Equinox}-authored client) for $129US from Equinox. So much for standards...

    Read the article

  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but the answer there does not apply in my case, I believe. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
        commands:
          "/usr/bin/git pull"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            classes => if_output_matches("Already up-to-date.","no_update");

        reports:
          no_update::
            "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
          exec_owner => "$(owner)";
          exec_group => "$(group)";
          useshell   => "true";
          chdir      => "$(folder)";
        }

    So, is it possible to use the output of a (possibly expensive) command and base some decision on that? The execresult function is no good choice for me because a) the pull may become expensive at times (not recommended per the cfengine3 reference) and b) it does not allow specifying user, group, or working dir - which is important in my case, as the repository is in user space and not owned by root.
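    There is no if_output_matches body in stock cfengine3 (the name above is hypothetical), but the module protocol achieves the same effect: a command run with module => "true" may define classes itself by printing lines that begin with '+'. A hedged sketch:

        # in the bundle, in place of the bare git pull
        commands:
          "/usr/local/bin/gitpull.sh $(target)"
            contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
            module  => "true";

    where /usr/local/bin/gitpull.sh is a hypothetical helper along these lines:

        #!/bin/sh
        # speak the cfengine module protocol: a line "+class" defines a class
        cd "$1" || exit 1
        if git pull 2>&1 | grep -q 'Already up-to-date.'; then
            echo "+no_update"
        fi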

    Read the article

  • Excel concatenate strings from cells listed in third cell

    - by Puddingfox
    I have an Excel 2007 workbook that has five columns:

    - A: a list of machines
    - B: a list of service numbers for each machine
    - C: a list of service names for each machine
    - ...(nothing here)...
    - I: a list of service numbers
    - J: a list of service names

    Each machine listed in column A has one or more services running on it from the list in column J. I would like to be able to add services to a machine (i.e. update the cell in column C) by simply adding another comma-separated number to column B. For example, the first row would look like this, assuming Machine1 has the first three services:

        |    A     |   B   |       C
        | Machine1 | 1,2,3 | HTTP,HTTPS,DNS

    Right now I have to manually update the formula in column C for each change I make. The current formula is:

        =CONCATENATE(J1,",",J2,",",J3)

    I would like to use something like this (please forgive my syntax; I'm a coder and I'm treating cell B1 as if it were an indexed array):

        =CONCATENATE(CELL("J"+B1[0] , "," , "J"+B1[1] , "," "J"+B1[2])

    Although having variable numbers of services makes this even more difficult. Is there any way of doing this? For reference, this is columns I and J:

        | I  | J
        | 1  | HTTP
        | 2  | HTTPS
        | 3  | DNS
        .....
        | 16 | Service16

    I don't know very much about Excel, so any help is greatly appreciated.
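    Excel 2007 has no built-in function that joins a variable-length list like this, so one hedged approach is a small VBA user-defined function (the name ServiceNames is made up; add it via Alt+F11, Insert > Module):

        ' Looks up each comma-separated number from ids in the first column
        ' of the names range and joins the results with commas.
        Function ServiceNames(ids As String, names As Range) As String
            Dim part As Variant
            Dim result As String
            For Each part In Split(ids, ",")
                If Len(result) > 0 Then result = result & ","
                result = result & names.Cells(CLng(Trim(part)), 1).Value
            Next part
            ServiceNames = result
        End Function

    Cell C1 would then hold =ServiceNames(B1, $J$1:$J$16), and appending ",4" to B1 updates C1 automatically.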

    Read the article

  • Radeon 6950 - Garbling of text and graphics in certain Windows only

    - by Greg
    This morning I noticed that the text in Gmail (in Firefox 4) looked a little funny: kind of thin, maybe with some color fringing. I went to work and thought it might be a ClearType issue, or something with the way Firefox 4 draws directly to the screen. When I came back from work (I had left the computer on), the problem was much worse, way beyond ClearType nit-picking: the text was barely readable. I opened Chrome and there was no such problem. It seems like only windows that use hardware acceleration are garbled, while ones that use GDI are not. But I fired up Dragon Age and didn't notice any problems (I only really looked at the main menu, though). Here is a link to a screenshot that illustrates the problem. Notice how the Windows Live Mesh window is completely unreadable and the text in Firefox 4 (left) is pretty bad, while Chrome, the Windows Control Panel, and the taskbar are perfectly fine. The fact that the problem shows up in screenshots, and that it only happens in certain windows, makes me confident that the problem cannot be with the monitor or the DVI cable. I am using the AMD Radeon drivers from 4/27/11. The card I have (MSI Frozr II) came with a slight overclock (810 MHz) out of the box, but it looks like it doesn't run at full clock on the Windows desktop (CCC reports 450 MHz). Still, I underclocked it to the stock reference clock (800 MHz) and it made no difference. The idle temperature according to Afterburner is 42-44 Celsius, which seems a tad high but not enough to cause a problem; the card is cold to the touch if I open up the machine. What the heck could be causing this? The problem varies in intensity; as we speak I'm in Firefox and things look better than they did earlier, but it'll probably get worse again soon. Specs: Radeon 6950 (MSI Frozr II), Seasonic X 560, Core i5 2500K at stock clock speeds, 16GB RAM, Asus P8P67 M Pro, 1440 x 900 display.

    Read the article

  • Visual Studio 2010 won't compile/create new projects

    - by tuner
    My Visual Studio 2010 Professional with SP1 installed won't compile anymore. The error shown is:

        TRACKER : error TRK0005: Failed to locate: "CL.exe". The system cannot find the file specified.

    Strangely, it is also no longer possible to create new projects: the wizard appears but just restarts when I press Create. As I found out, the paths for Visual Studio are now built from settings in the registry, namely HKEY_CURRENT_USER\Software\Microsoft\VisualStudio. Comparing a colleague's installation with mine revealed no different settings. This is how the Property Pages / Configuration Properties / VC++ Directories look:

        Executable Directories: $(ExecutablePath)
        Include Directories:    $(IncludePath)
        Reference Directories:  $(ReferencePath)
        Library Directories:    $(LibraryPath)
        Source Directories:     $(SourcePath)
        Exclude Directories:    $(ExcludePath)

    From the Visual Studio 2010 Command Prompt, cl.exe is found. I can only guess that this behavior was caused by a reinstallation of Studio a couple of months ago (to a different folder). As we use an external build script for our main project, there is a good chance it has been broken since then. Any hints?

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first: I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger's Cat). I've just installed OpenLDAP with yum, and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include         /etc/openldap/schema/core.schema
        include         /etc/openldap/schema/cosine.schema
        include         /etc/openldap/schema/inetorgperson.schema
        include         /etc/openldap/schema/nis.schema
        pidfile         /var/lib/ldap/slapd.pid
        loglevel        trace

        ##### Default Schema #####
        database        mdb
        directory       /var/lib/ldap/
        maxsize         1073741824
        suffix          "dc=domain,dc=tld"
        rootdn          "cn=root,dc=domain,dc=tld"
        rootpw          {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL that aims to give the owner and the administrators group write access to the userPassword attribute, let anonymous use it for authentication, and refuse access to everyone else. The problem is: even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
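    A hedged observation: once slapd.conf contains any access directive, everything not matched falls through to an implicit "access to * by * none", so the entries themselves (not just userPassword) become invisible to bound users; that would produce exactly this "32 No such object" on the search base. A minimal sketch of a follow-up rule, assuming that is what is happening here:

        access to *
            by users read
            by * none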

    Read the article
