Search Results

Search found 79383 results on 3176 pages for 'type system'.


  • MySQL permission errors

    - by dotancohen
    It seems that on an Ubuntu 14.04 machine the user mysql cannot access anything. It is not writing logs nor reading files. Witness:

        bruno():mysql$ cat /etc/passwd | grep mysql
        mysql:x:116:127:MySQL Server,,,:/nonexistent:/bin/false

        bruno():mysql$ sudo mysql_install_db
        Installing MySQL system tables...
        140818 18:16:50 [ERROR] Can't read from messagefile '/usr/share/mysql/english/errmsg.sys'
        140818 18:16:50 [ERROR] Aborting
        140818 18:16:50 [Note] Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
        ...boilerplate trimmed...

        bruno():mysql$ ls -la /usr/share/mysql/english/errmsg.sys
        -rw-r--r-- 1 root root 59535 Jul 29 13:40 /usr/share/mysql/english/errmsg.sys

        bruno():mysql$ wc -l /usr/share/mysql/english/errmsg.sys
        16 /usr/share/mysql/english/errmsg.sys

    Here we have seen that mysql cannot read /usr/share/mysql/english/errmsg.sys even though the permissions are open to read it, and in fact the regular login user can read the file (with wc). Additionally, MySQL is not writing any logs:

        bruno():mysql$ ls -la /var/log/mysql
        total 8
        drwxr-s---  2 mysql adm    4096 Aug 18 16:10 .
        drwxrwxr-x 18 root  syslog 4096 Aug 18 16:10 ..

    What might cause this user to not be able to access anything? What can I do about it?
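    A quick way to separate plain filesystem permissions from something specific to the mysqld process (for example, a MAC policy such as AppArmor confining the daemon) is to attempt the read directly as the mysql user; a minimal sketch, assuming sudo is available:

        # Read the file as the mysql user; success here points the blame
        # at the daemon's confinement rather than at file permissions.
        sudo -u mysql cat /usr/share/mysql/english/errmsg.sys > /dev/null && echo readable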


  • Installed Paragon HFS+ for Windows 8, now my pc won't recognize the external firewire drive

    - by Steve
    I'm not incredibly knowledgeable about computers and I really need some help. I just got a Seagate external FireWire drive this morning. I downloaded the necessary PC driver (Paragon HFS+ for Windows 8) through their website, per the instructions that came with the drive. After installation, I restarted and the PC recognized the FireWire drive just fine. About three hours into copying files from my PC to the FireWire drive, it gave me an error and told me the files couldn't be copied. When I clicked to dismiss the message, the computer crashed. After an hour of it trying to repair itself in safe mode, it restored me to an earlier version from before the system crash. Here's my current dilemma: Paragon HFS+ is still showing up in my programs as installed, but Device Manager is not recognizing the drive. When I try to uninstall and reinstall Paragon, it interrupts me with a message saying "The setup must update files or services that cannot be updated while the system is running" and basically gives me the finger. I have no idea what to do now, as it won't let me uninstall and reinstall Paragon, and I have no idea why it crashed my computer in the first place. Is there possibly another Mac-to-PC FireWire driver I can try downloading instead? I really don't know what I'm doing and any help would be greatly appreciated.


  • auth.log is empty (Ubuntu)

    - by Vinicius Pinto
    The /var/log/auth.log file in my Ubuntu 9.04 is empty. syslogd is running, and the content of /etc/syslog.conf is as follows. Any help? Thanks.

        #  /etc/syslog.conf     Configuration file for syslogd.
        #
        #                       For more information see syslog.conf(5)
        #                       manpage.
        #
        # First some standard logfiles.  Log by facility.
        #
        auth,authpriv.*                 /var/log/auth.log
        *.*;auth,authpriv.none          -/var/log/syslog
        #cron.*                         /var/log/cron.log
        daemon.*                        -/var/log/daemon.log
        kern.*                          -/var/log/kern.log
        lpr.*                           -/var/log/lpr.log
        mail.*                          -/var/log/mail.log
        user.*                          -/var/log/user.log
        #
        # Logging for the mail system.  Split it up so that
        # it is easy to write scripts to parse these files.
        #
        mail.info                       -/var/log/mail.info
        mail.warning                    -/var/log/mail.warn
        mail.err                        /var/log/mail.err
        #
        # Logging for INN news system
        #
        news.crit                       /var/log/news/news.crit
        news.err                        /var/log/news/news.err
        news.notice                     -/var/log/news/news.notice
        #
        # Some `catch-all' logfiles.
        #
        *.=debug;\
                auth,authpriv.none;\
                news.none;mail.none     -/var/log/debug
        *.=info;*.=notice;*.=warning;\
                auth,authpriv.none;\
                cron,daemon.none;\
                mail,news.none          -/var/log/messages
        #
        # Emergencies are sent to everybody logged in.
        #
        *.emerg                         *
        #
        # I like to have messages displayed on the console, but only on a virtual
        # console I usually leave idle.
        #
        #daemon,mail.*;\
        #       news.=crit;news.=err;news.=notice;\
        #       *.=debug;*.=info;\
        #       *.=notice;*.=warning    /dev/tty8

        # The named pipe /dev/xconsole is for the `xconsole' utility.  To use it,
        # you must invoke `xconsole' with the `-file' option:
        #
        #    $ xconsole -file /dev/xconsole [...]
        #
        # NOTE: adjust the list below, or you'll go crazy if you have a reasonably
        #       busy site..
        #
        daemon.*;mail.*;\
                news.err;\
                *.=debug;*.=info;\
                *.=notice;*.=warning    |/dev/xconsole
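    A simple end-to-end test of the auth facility, independent of any particular service, is to emit a message by hand and see whether it lands in the file; a sketch using the standard logger utility:

        # Send a test message to the auth facility, then check the log.
        logger -p auth.notice "auth.log test message"
        tail -n 5 /var/log/auth.log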


  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP, so the time on these servers is starting to drift, since they're relying only on the hardware clock.

    When I type "net time" on one of the production servers, I get the following error:

        Could not locate a time-server. More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

        Sending resync command to local computer
        The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

        Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP.

    I can manually sync each machine to the time source (the PDC emulator) by doing the following:

        net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it?

    Thanks for your help,
    Richard
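    A common first step on domain members in this state (an assumption, not something the poster reports trying) is to point the Windows Time service back at the domain hierarchy explicitly and restart it:

        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover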


  • How to mount vfat drive on Linux with ownership other than root?

    - by Norman Ramsey
    I'm running into trouble mounting an iPod on a newly upgraded Debian Squeeze. I suspect either a protocol has changed or I've tickled a bug, which I don't know where to report. I'm trying to mount the iPod so that I have permission to read and write it. But my efforts come to nothing:

        $ sudo mount -v -t vfat -o uid=32074,gid=6202 /dev/sde2 /mnt
        /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202)
        $ ls -l /mnt
        total 80
        drwxr-xr-x 2 root root 16384 Jan  1  2000 Calendars
        drwxr-xr-x 2 root root 16384 Jan  1  2000 Contacts
        drwxr-xr-x 2 root root 16384 Jan  1  2000 Notes
        drwxr-xr-x 3 root root 16384 Jun 23  2007 Photos
        drwxr-xr-x 6 root root 16384 Jun 19  2007 iPod_Control
        $ sudo umount /mnt
        $ sudo mount -v -t vfat -o uid=nr,gid=nr /dev/sde2 /mnt
        /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202)
        $ ls -l /mnt
        total 80
        drwxr-xr-x 2 root root 16384 Jan  1  2000 Calendars
        drwxr-xr-x 2 root root 16384 Jan  1  2000 Contacts
        drwxr-xr-x 2 root root 16384 Jan  1  2000 Notes
        drwxr-xr-x 3 root root 16384 Jun 23  2007 Photos
        drwxr-xr-x 6 root root 16384 Jun 19  2007 iPod_Control

    As you see, I've tried both symbolic and numeric IDs, but the files persist in being owned by root (and only writable by root). The IDs are really mine; I've had the UID since 1993.

        $ id
        uid=32074(nr) gid=6202(nr) groups=6202(nr),0(root),2(bin),4(adm),...

    I've put an strace at http://pastebin.com/Xue2u9FZ, and the mount(2) call looks good:

        mount("/dev/sde2", "/mnt", "vfat", MS_MGC_VAL, "uid=32074,gid=6202") = 0

    Finally, here's my kernel version from uname -a:

        Linux homedog 2.6.32-5-686 #1 SMP Mon Jun 13 04:13:06 UTC 2011 i686 GNU/Linux

    Does anyone know if I should be doing something different, if there is a workaround, or, if this is a bug, where it should be reported?
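    For what it's worth, vfat also accepts umask/dmask/fmask options that control the permission bits reported on the mounted files; a sketch combining them with the ownership options (the mask values here are illustrative, not from the post):

        sudo mount -t vfat -o uid=32074,gid=6202,dmask=022,fmask=133 /dev/sde2 /mnt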


  • Issue running 32-bit executable on 64-bit Windows

    - by David Murdoch
    I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed (running from cmd.exe):

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()   ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()   ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    The PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, ps, etc. are rendered accurately... just void of any text at all.

    Server information:

        Windows edition: Windows Server Standard Service Pack 2
        Processor: Intel Xeon E5410 @ 2.33GHz 2.33 GHz
        Memory: 8.00 GB
        System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags/title, comment them or edit the question. :-)


  • Server appears to have lost ability to read PHP files with LAMP

    - by OtagoHarbour
    I have LAMP installed on a PC running Ubuntu 11.10. LAMP was running fine, but I had to restart the PC because Unity was messing up (as it often does) and the tool bar had disappeared. When it started up, I was unable to fire up any PHP files. I have a file index.php in /var/www. It is owned by www-data, as is the directory it is in. The LAN address of the server is 192.168.1.10. However, when I type 192.168.1.10 into the URL box in Mozilla Firefox, I get:

        Unable to connect
        Firefox can't establish a connection to the server at 192.168.1.10

    This server is connected to another server on the LAN that has the LAN address 192.168.1.4. When I type 192.168.1.4 into the Firefox URL box on 192.168.1.10, I see the display associated with index.php on 192.168.1.4. Why can it not display its own /var/www/index.php?

    Any assistance with this would be greatly appreciated,
    Peter.
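    Since "Unable to connect" suggests nothing is listening on port 80 rather than a PHP-level fault, a first check worth running (a sketch, assuming the stock Ubuntu Apache packaging) is whether Apache came back up after the reboot:

        # Is Apache running, and is anything bound to port 80?
        sudo service apache2 status
        sudo netstat -tlnp | grep ':80'
        # If it is down, start it and watch the error log.
        sudo service apache2 start
        tail -n 20 /var/log/apache2/error.log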


  • Windows 7 PC cannot see some LAN PCs, but can access them via path

    - by zoot
    In an office LAN with Windows 7 Professional workstations and a FreeNAS Samba server, two workstations have intermittent problems browsing for the other workstations, as well as for the FreeNAS server. However, so far it appears that typing in the path to any of the workstations that aren't visible via the "browse" function works; i.e., the machine Workstation7 is not visible while browsing via Windows Explorer, but is accessible if I type \\Workstation7 in the path field. Occasionally the workstations exhibiting these symptoms show errors that their connection to the FreeNAS server has failed, and only rebooting resolves the issue. All other workstations on the network use identical Windows 7 Professional installations and never have these problems. I've checked all machines and they're not using Home Groups. All are set up in the same workgroup as the FreeNAS server, and the network type is set to Work Network. Temporarily disabling the firewall on the workstations with the issue made no difference, so I know this has nothing to do with the firewall settings. Any pointers would be appreciated. Thanks.


  • Login to OS X Server User Account from Local Computer

    - by Brod Wilkinson
    I have OS X Server installed on a Mac Mini. I've created several user accounts, one of which is:

        Account Name: Bob
        Password: abc123

    From the Mac Mini's login screen I can choose "Server" (the main account), "Bob" (Bob's account), and "Other..." for OS X Server accounts; from "Other...", if I input Bob's credentials it will log me in. I also have a MacBook Air, and I would like to be able to select "Other..." from its login screen, input Bob's credentials, and have it log in to Bob's account, or any other user account for that matter. My server is set up as private with the server address server.network.private.

    Following some googled instructions as well as Apple's own instructions, I have set up an Open Directory with:

        Username: diradmin
        Password: abc123

    Then on the MacBook Air I've gone into System Preferences > Users & Groups > Login Options, clicked Join next to Network Account Server, input my server (server.network.private) with the diradmin credentials, and it's connected. Great. I've also ticked Allow Network Users to Login at Login Window and selected All Users. I was assuming this would allow my MacBook Air to log in to the "Bob" account by selecting "Other..." from the login window, although there is no "Other..." option. I then set up a VPN with basic credentials and logged into it on the MacBook Air, and still not much has changed.

    I am able to share screens with the "Bob" account from my MacBook Air by clicking Share Screen... from the Finder under Shared > Network Server and then clicking Log In, but this obviously requires the MacBook Air to already be logged into an account before it can share screens, which is not suitable.

    Is there any way to simply log in to the OS X Server user account from the MacBook Air's login screen via "Other...", like on the Mac Mini's login screen? Thanks in advance.

        Operating System: OS X 10.9 Mavericks
        OS X Server: Version 3


  • Erase just the free space on my hard drive

    - by Patriot
    I'm about to give away an older computer with just the Windows XP operating system intact and all other programs uninstalled. However, upon peeking at the "free space" with software called Recuva, I notice lots of deleted things that could be recoverable. Some of these include sensitive data files, PDFs, and other personal items that I would not want retrieved. I ran a program called Eraser to try and overwrite that data, but it failed to do an adequate job. I also tried to do the job with Glary Utilities, but it failed too. Short of installing a new, very cheap hard drive and re-installing the bare-bones operating system, I'm out of ideas.

    EDIT - WOW!!! I was not really expecting this many GREAT ideas. My next question is this: if I go the DBAN route and truly wipe the hard drive, then restore my disc image (I use Acronis True Image), will it also restore the free-space data? Does imaging just copy readable data? I have an old image of when the OS was first installed.
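    One built-in option worth noting is the cipher tool, which overwrites only the free space on a volume; a sketch, with the caveat that its availability on Home editions of XP is an assumption to verify:

        REM Overwrite free space on C: (cipher makes three passes:
        REM zeros, ones, then random data).
        cipher /w:C:\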


  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

        Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
        CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm)
        RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
        Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
        Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
        Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
        Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
        Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so. Then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars. But Comodo Firewall will stop showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection will work properly for another 15 minutes.

    I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem starting about a week ago. For the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?


  • How do I renew a Web Server certificate in Windows Server 2008?

    - by Mark Seemann
    The SSL certificate for my web site just expired a few days ago, and I would like to renew it. I originally issued it two years ago using my Windows 2008 Certificate Authority, and it's worked without a hitch in all that time, so I would like to renew the certificate as simply as possible to make sure that all the applications relying on that certificate continue to work.

    I can open an MMC instance and add the Certificates snap-in for the Local Computer. I can find the relevant certificate under Personal, but I can't renew it. When I select Renew certificate with new key, I get the following message:

        Web Server
        Status: Unavailable
        The permissions on the certificate template do not allow the current user to enroll for this type of certificate.
        You do not have permission to request this type of certificate.

    However, I can't understand this, as I'm logged on as a Domain Admin and I'm running the MMC instance in elevated mode. I've checked the Web Server certificate template, and Domain Admins have the Enroll permission on this template. FWIW, I also tried rebooting the server. How can I renew the certificate?


  • Print jobs sent to server OK, but then get deleted

    - by Paul Morrison
    I have 2 HP computers, one running Win XP SP3, one running Win 7. I have a Lexmark X4270 All-in-One printer attached to the Win 7 machine via a USB port. I can print OK from the Win 7 machine, but when I print from the WinXP machine, the print job shows up in both print queues (showing the same number of bytes - which is good!), but then the status gets changed to "Deleting - Sent to printer", and that status shows up in both print queues. The print job then stays there until I do a cancel, followed by a system restart. FWIW, the owner is shown as Guest, but I have permission for Everyone set to print...

    I believe I have up-to-date drivers, and I don't believe it's a firewall problem. What I would like to see is the Win7 machine's reason for deleting my print jobs - is there a diagnostic tool available?

    Also, I notice that the port for this printer on the WinXP machine is set to USB001 - I would have thought something like \\servername\sharedprinter would be more appropriate - and I can see that in the list of ports, but the system doesn't let me change the port name from USB001... Could someone shed some light? I have spent hours on this! TIA. BTW, I can do file sharing, no problem!
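    Rather than fighting the USB001 port entry, the usual approach on the XP side is to connect to the printer by its UNC share name; a sketch, where the machine and share names are hypothetical stand-ins:

        REM Connect the XP machine to the printer shared by the Win7 box.
        rundll32 printui.dll,PrintUIEntry /in /n "\\WIN7PC\LexmarkX4270"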


  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine; I can test them by doing an 'nslookup mydomain.com xIP'. I have now taken out DNS services with DYN.com to allow me to manage the DNS zone file, so that a lookup of mydomain.com will ask my load balancers for the IP address to resolve.

    Step 1: the NS record for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:

        ns1.mydomain.com    A   [ip address of load balancer 1]
        ns2.mydomain.com    A   [ip address of load balancer 2]
        www.mydomain.com    NS  ns1.mydomain.com
        www.mydomain.com    NS  ns2.mydomain.com

    All is well - when I type www.mydomain.com, the request gets delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.

    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it seems that providing an NS record for the root is not allowed, as this overrides the default nameservers. How can I delegate both the root and the www to my load balancers?
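    For context, in a conventional zone file the NS records at the apex name the servers that are authoritative for the whole zone, which is why the apex cannot be delegated away the way a subdomain can; a BIND-style sketch (hypothetical names, not from the post):

        ; the apex NS records must name the servers hosting this zone
        mydomain.com.      NS  ns1.dyn-provider.example.
        mydomain.com.      NS  ns2.dyn-provider.example.
        ; a subdomain, by contrast, CAN be delegated to other servers
        www.mydomain.com.  NS  ns1.mydomain.com.
        www.mydomain.com.  NS  ns2.mydomain.com.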


  • How to avoid Windows Genuine Advantage for an XP update?

    - by hlovdal
    I am about to apply updates to a Windows XP installation I have not booted in a couple of years. When going to update.microsoft.com, it forced me first to accept an ActiveX installation, and now it wants me to install WGA:

        Windows Update
        To use this latest version of Windows Update, you will need to upgrade some of its components. This version provides you with the following enhancements to our service:
        <... useless list of "advantages" ...>

        Details
        Windows Genuine Advantage Validation Tool (KB892130)
        1.1 MB, less than 1 minute
        The Windows Genuine Advantage Validation Tool enables you to verify that your copy of Microsoft Windows is genuine. The tool validates your Windows installation by checking Windows Product Identification and Product Activation status.

        Update for Windows XP (KB898461)
        477 KB, less than 1 minute
        This update installs a permanent copy of Package Installer for Windows to enable software updates to have a significantly smaller download size. The Package Installer facilitates the install of software updates for Microsoft Windows operating systems and other Microsoft products. After you install this update, you may have to restart your system.

        Total: 1.5 MB, less than 1 minute

    I have heard nothing but bad things about WGA, and I absolutely do not want it installed on my system (this answer seems to give some options). Searching for "windows xp" at Microsoft's web pages brought up this page, which says:

        Windows XP Service Pack 3 Network Installation Package for IT Professionals and Developers

        Brief Description
        This installation package is intended for IT professionals and developers downloading and installing on multiple computers on a network. If you're updating just one computer, please visit Windows Update at http://update.microsoft.com.
        ...
        File Name: WindowsXP-KB936929-SP3-x86-ENU.exe

    I am currently downloading this file. Will installing this bring my installation up to date with security updates? What about later updates, whenever a new problem is discovered - how can I update without using WGA?
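    For reference, Microsoft's self-extracting update packages of that era accept the documented update.exe switches for unattended installs; a sketch, on the assumption that this package uses the same installer:

        REM /passive shows progress only; /norestart suppresses the reboot.
        WindowsXP-KB936929-SP3-x86-ENU.exe /passive /norestart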


  • Solaris Fibre Channel target - Configure QLogic QLA2340

    - by growse
    I'm currently trying to set up a small storage system as a Fibre Channel target. This is for testing, so I'm currently using Solaris (Nexenta) and a QLogic QLA2340 HBA. For some reason, the qlc and qlt drivers don't support the QLA2340, so I'm using the qla2300 driver from QLogic's website. I've also got the scli utility installed for configuration, and the HBA is detected by the system.

    That said, it's not clear how I get from this point to having a ZFS volume exposed as an FC target. I was originally following this guide (http://www.youtube.com/watch?v=yzEBd3l7Qn4), but it seems that without the qlc/qlt drivers, Sun's configuration tools won't work. Does that also imply that COMSTAR won't work? What's the best way to expose an FC target with this setup? Most of the options I'm seeing in scli complain that the port state is LinkDown (it is; I've not plugged anything in yet). Do I have to have my FC client plugged up and working before I can configure the target?

    Apologies for the slight vagueness of the question, but I'm not overly familiar with the terminology.


  • How to prevent nginx from locking files on a mounted Samba partition in CentOS 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is Windows 7.

    The site files nginx is serving are on a mounted Samba partition, which is a folder on the host Windows system. I.e., inside Linux, nginx paths refer to /home/vhosts, and this is mounted from D:\vhosts\ on Windows. The Samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username, and password:

        //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0

    I.e., Linux/nginx reads from the Windows share, and not the opposite. In /etc/samba/smb.conf, I have tried to disable all Samba locking features, but it seems to have no effect even after rebooting the virtual machine:

        locking = no
        share modes = no
        oplocks = no
        level2 oplocks = no
        kernel oplocks = no

    I'm receiving "Access is denied" errors in Windows or Linux when attempting to overwrite the JavaScript file in Windows that has been accessed at least once with nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories; however, that may be a different issue not related to the use of Samba. I'm using Samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the Windows host.

    SOLVED: I just reviewed my nginx.conf file. It appears the open_file_cache feature is what was causing the lock and the deleted files. When I set this option to open_file_cache off;, my problem is resolved. I will repost as the answer when it allows me to do so.
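    For anyone skimming, a minimal sketch of the change described in the SOLVED note (the directive is a real nginx option; the surrounding file layout is assumed):

        http {
            # Disable the open file cache so nginx does not keep handles
            # open on the CIFS-mounted files between requests.
            open_file_cache off;
        }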


  • Getting ArrayIndexOutOfBoundsException: 2048 for everything I do in Eclipse

    - by Bernhard V
    Hi, whatever I'm doing in Eclipse, I get an error. At startup I get an error at "Java tooling initializing". I get an error when I want to open a type. And it's always the same error. For example, when opening a type I get:

        An internal error occurred during: "Cache refresh".
        2048

    The error at startup also reports the error code 2048. I'm using the most up-to-date version of Eclipse. Do you know a way to fix this issue?

    edit: Here is the stack trace of the error at "Java tooling initializing":

        java.lang.ArrayIndexOutOfBoundsException: 2048
            at org.eclipse.jdt.internal.core.index.DiskIndex.readStreamChars(DiskIndex.java:870)
            at org.eclipse.jdt.internal.core.index.DiskIndex.initialize(DiskIndex.java:370)
            at org.eclipse.jdt.internal.core.index.Index.<init>(Index.java:96)
            at org.eclipse.jdt.internal.core.search.indexing.IndexManager.getIndex(IndexManager.java:248)
            at org.eclipse.jdt.internal.core.search.indexing.IndexManager.getIndexes(IndexManager.java:309)
            at org.eclipse.jdt.internal.core.search.PatternSearchJob.getIndexes(PatternSearchJob.java:81)
            at org.eclipse.jdt.internal.core.search.PatternSearchJob.ensureReadyToRun(PatternSearchJob.java:50)
            at org.eclipse.jdt.internal.core.search.processing.JobManager.performConcurrentJob(JobManager.java:174)
            at org.eclipse.jdt.internal.core.search.BasicSearchEngine.searchAllTypeNames(BasicSearchEngine.java:1122)
            at org.eclipse.jdt.core.search.SearchEngine.searchAllTypeNames(SearchEngine.java:713)
            at org.eclipse.jdt.internal.ui.dialogs.FilteredTypesSelectionDialog$ConsistencyRunnable.refreshSearchIndices(FilteredTypesSelectionDialog.java:653)
            at org.eclipse.jdt.internal.ui.dialogs.FilteredTypesSelectionDialog$ConsistencyRunnable.run(FilteredTypesSelectionDialog.java:636)
            at org.eclipse.jdt.internal.ui.dialogs.FilteredTypesSelectionDialog.reloadCache(FilteredTypesSelectionDialog.java:679)
            at org.eclipse.ui.dialogs.FilteredItemsSelectionDialog$RefreshCacheJob.run(FilteredItemsSelectionDialog.java:1502)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
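    The trace points at JDT's on-disk search index, so a common workaround (a suggestion, not a confirmed fix; the paths are the standard JDT metadata locations, with <workspace> as a placeholder) is to delete the indexes and let Eclipse rebuild them:

        # Close Eclipse first; the indexes are regenerated on next start.
        rm -rf <workspace>/.metadata/.plugins/org.eclipse.jdt.core/*.index
        rm -f  <workspace>/.metadata/.plugins/org.eclipse.jdt.core/savedIndexNames.txt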


  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around, but I'm having no luck with some peculiar behavior with a flash archive. I'm using HP Server Automation 9.14 to deploy the OS. I'm creating a Solaris 10 flash archive to create a snapshot default build in our environment. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last error in the log I can see is:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive.
               The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

    The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460c Gen8; some more info below.

    OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        .        run-level 3  Oct 15 08:15     3      0  S

    Disks:

        # echo | format
        Searching for disks...done

        AVAILABLE DISK SELECTIONS:
               0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
                  /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number):
        Specify disk (enter its number):

    Zpools:

        # zpool list
        NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
        rpool   136G  24.6G   111G    18%  ONLINE  -

    Zones:

        # zoneadm list -cv
          ID NAME     STATUS     PATH      BRAND    IP
           0 global   running    /         native   shared

    The file size of 2047 seems suspiciously close to 2048, which is concerning. Any help would be greatly appreciated. Thanks.


  • glib2 64-bit compile fails on Solaris 10

    - by Aaron
    I'm encountering a problem building glib-2.26.1 on a Solaris 10 box, 64-bit. Due diligence doesn't turn anything up, but no matter what I do the build fails in the same way. I've tried using the Sun Studio compiler and gcc (SFW) to no avail. When I compile I get the following error:

        [root@foo glib-2.26.1]$ export CC=/opt/solstudio12.2/bin/cc
        [root@foo glib-2.26.1]$ export CFLAGS="-m64"
        ...configure goes normally...
        [root@foo glib-2.26.1]$ make
        ...snip...
        source='gatomic.c' object='gatomic.lo' libtool=yes \
        DEPDIR=.deps depmode=none /bin/bash ../depcomp \
        /bin/bash ../libtool --tag=CC --mode=compile /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c -o gatomic.lo gatomic.c
        libtool: compile: /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c gatomic.c -KPIC -DPIC -o .libs/gatomic.o
        "gatomic.c", line 885: warning: no explicit type given
        "gatomic.c", line 885: syntax error before or at: *
        "gatomic.c", line 885: warning: old-style declaration or incorrect type for: g_atomic_mutex
        "gatomic.c", line 906: warning: implicit function declaration: g_mutex_lock
        "gatomic.c", line 909: warning: implicit function declaration: g_mutex_unlock
        "gatomic.c", line 1155: warning: implicit function declaration: g_mutex_new
        "gatomic.c", line 1155: warning: improper pointer/integer combination: op "="
        cc: acomp failed for gatomic.c
        make[4]: *** [gatomic.lo] Error 1
        make[4]: Leaving directory `/root/glib-2.26.1/glib'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/root/glib-2.26.1/glib'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/root/glib-2.26.1/glib'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/root/glib-2.26.1'
        make: *** [all] Error 2

    Does anyone know where the build might be going wrong? Not sure where else to look here. Thanks.


  • If I can take a screen capture of a graphical anomaly, can it still be a hardware issue?

    - by Jay Carr
    I have a strange graphical anomaly going on with my iMac right now (green and magenta boxes are appearing sporadically). I'm slowly trying to work through different possibilities, but I thought I'd start with the basics: can a graphical anomaly that I can screen capture still be a hardware issue?

    I know, it seems really obvious: if it's hardware, it should show up well after the operating system has had its say. And since the operating system is (I assume) doing the screen capture, it seems like it shouldn't see the anomaly unless the problem is software in nature. But as I've researched this problem, I see a lot of people taking their computers in to service people for hardware issues, and Apple then resolving said issue. To further complicate things, I also have Windows 8 installed via Boot Camp, and the issue seems to be showing up there as well.

    Anyway, it feels like it must be a driver issue, since I assume that's what the two OSes have in common, but... I thought I'd come here for some disambiguation. In my case, yes, I can screen capture the anomaly (at least in OS X I can), so I assume it's somehow a software (or driver) issue. But I wanted to double check, because the internet is being ambiguous...


  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this only by using scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        #Change to source dir
        cd /mnt/disk1

        #Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        #Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        #Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
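    One way to keep the whole job on the target, assuming wildcards are passed through to the remote side and the dated YYYYMMDD layout is as shown, is to pull the tree into a staging area and do the iteration and encryption locally; a sketch ('user@source' stands in for the redacted source host):

        #!/bin/sh
        STAGE=/mnt/disk2/staging
        mkdir -p "$STAGE"
        # One recursive transfer of every dated directory.
        scp -r -p -B "user@source:/mnt/disk1/*" "$STAGE/"
        # Pick the most recent date locally, then encrypt its files.
        j=$(ls -d "$STAGE"/*/ | sort | tail -n 1)
        for k in "$j"bsource/*; do
          [ -f "$k" ] || continue
          gpg -e -r "Backup Key" --batch --no-tty \
            -o "/mnt/disk2/btarget/$(basename "$k").gpg" "$k"
        done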


  • Why Is ModSecurity Unable to Access the Data Directory?

    - by tommytwoeyes
    Update: I think we've solved this; the problem appears to have been a result of the /modsec_storage directory having an incorrect value for its SELinux context type. However, we're still not sure, because although Apache was able to create files in that directory for the global and ip collections (global.dir/global.pag and ip.dir/ip.pag) after I changed the SELinux context type, the new files still have zero bytes. I'm new to ModSecurity and am not sure if the files are empty because something is wrong with the configuration, or if ModSecurity has simply determined it doesn't need to store IP addresses persistently after each transaction ends. Anyone able to offer guidance here?

    I recently installed ModSecurity (v2.5.12 / CRS v2.0.8) on our production server, and everything works great except for these errors that it keeps writing to the Apache error log:

        Failed to access DBM file "/modsec_storage/global": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"]
        Failed to access DBM file "/modsec_storage/ip": Permission denied [hostname "www.internationalstudent.com"] [uri "/includes/soc_bookmarks/images/delicious.png"] [unique_id "LZ6jc38AAAEAAFO6408AAABO"]

    After following the instructions for file permission settings in the ModSecurity handbook by Ivan Ristic, with no success, I created a /modsec_storage directory, set the owner & group to apache, and recursively set the permissions on the directory to 777. However, ModSecurity is still reporting the same permission errors, so I am stumped. Can anyone tell me how to fix this?
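    For readers hitting the same errors, a sketch of the SELinux change the update describes (httpd_sys_rw_content_t is the usual type for Apache-writable content, but treat it as an assumption for your policy version):

        # Record the context persistently, then apply it to the tree.
        semanage fcontext -a -t httpd_sys_rw_content_t "/modsec_storage(/.*)?"
        restorecon -Rv /modsec_storage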


  • How to deal with lots of brackets in a formula?

    - by wenlibin02
    Say I have a formula like this (in LaTeX or Maple or another text system):

        Result: ((6*(k2+k3))*A123*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*(exp(-k3*(k3^2*t-x)))^2+6*A12*(-k3+k2)*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*exp(-k3*(k3^2*t-x)))*(exp(-k2*(k2^2*t-x)))^2+(-(6*(-k3+k2))*A13*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*(exp(-k3*(k3^2*t-x)))^2-(6*(k2+k3))*k2*k3*(A12*A13*k2^2-2*A12*A13*k2*k3+A12*A13*k3^2-A123*k2^2-2*A123*k2*k3-A123*k3^2)*exp(-k3*(k3^2*t-x)))*exp(-k2*(k2^2*t-x))

    Note: the above formula is only one part of the result of a Maple calculation; I just can't break it up because there are so many terms. Apparently, it's very hard to read. What I want to do is fold the matched brackets level by level. If all the brackets are folded, I can find out clearly how many terms there are. Then I can analyze from the top level down to the details of every term. But I just don't know how to do that. Maybe there is existing software which can visualize this kind of complex formula. Any ideas?

    P.S. I use a Linux system. Open source alternatives are preferred.


  • Dual booting Linux/Win7, Grub refuses to load Win7

    - by JohnB
    Decided to give Linux Mint a try (Ubuntu's interface annoys me), so I installed it with the intention of dual booting with Windows 7. Installation went fine, but now I can only boot into Linux Mint. GRUB lists two Windows 7 menu options, but selecting either of them causes an "unknown file system" error and dumps me into a GRUB recovery prompt. There, I have to manually reset the root and prefix options, as they reset to hd0,msdos6 when they should be hd0,msdos5.

    I ran Boot Repair twice, once to fix GRUB errors, once to rebuild the MBR, but it didn't fix anything. Here is the log: http://paste.ubuntu.com/1029675/

    fdisk output:

        Device Boot      Start         End      Blocks  Id  System
        /dev/sda1   *      2048      206847      102400   7  HPFS/NTFS/exFAT
        /dev/sda2        206848  1486249145   743021149   7  HPFS/NTFS/exFAT
        /dev/sda3    1486249982  1953523711   233636865   5  Extended
        /dev/sda5    1486249984  1945141247   229445632  83  Linux
        /dev/sda6    1945143296  1953523711     4190208  82  Linux swap / Solaris

    grub.cfg:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set=root 86184D18184D091F
            chainloader +1
        }
        menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos2)'
            search --no-floppy --fs-uuid --set=root 56D84F84D84F60FB
            chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

    I have found a few similar troubleshooting guides so far, but no amount of updating/configuring GRUB has been successful. Last resort is, I suppose, to use the W7 recovery disc and start over. Thanks in advance!

        Linux Mint 13 Maya, 64-bit
        Windows 7 Home Edition, 64-bit
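    As a point of reference, the Windows entry can be tested by hand from the GRUB command line (press 'c' at the menu) before touching any configuration; these are standard GRUB shell commands, not taken from the post:

        insmod part_msdos
        insmod ntfs
        set root=(hd0,msdos1)
        chainloader +1
        boot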

