Search Results

Search found 19212 results on 769 pages for 'side projects'.

Page 232 of 769

  • How do I hook into Tar with BASH?

    - by orb
    Long story short: I am working with Tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of Tar in order to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple
        cat $input-file | base64 -d > $output-file
    will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images? In the GNU Tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I would like hooked into various moments of Tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure Tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.
    Background information not included in the long story short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a Tar archive (haven't pushed that to GitHub yet), but the images present in said Tar archive cannot be manipulated unless they are decoded from base64.
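    A minimal sketch of the post-extraction approach described above (the script name, usage and in-place overwrite behaviour are illustrative assumptions, not taken from the project): extract the archive as usual, then decode every extracted .png from base64.
        #!/usr/bin/env bash
        # untar-and-decode.sh -- extract a tar archive, then base64-decode the PNGs in it
        set -euo pipefail

        archive="$1"            # e.g. effect.tar
        dest="${2:-.}"          # extraction directory, defaults to the current one

        # 1. Plain extraction, exactly what tar -xf already does
        tar -xf "$archive" -C "$dest"

        # 2. Post-processing hook: decode each extracted PNG in place
        while IFS= read -r -d '' png; do
            base64 -d "$png" > "$png.tmp" && mv "$png.tmp" "$png"
        done < <(find "$dest" -name '*.png' -print0)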

    Read the article

  • Trying to delete directory with "rm -rf", but get message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty":
        Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/
        total 16
        drwxr-xr-x  5 benjaminhocking  staff  170 Aug 27 14:46 .
        drwxr-xr-x  3 benjaminhocking  staff  102 Aug 27 15:28 ..
        Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/
        rm: empty_directory/: Directory not empty
        Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/
        rmdir: empty_directory/: Directory not empty
    If I try the same thing using Finder (dragging the folder to the Trash), I get the message "The operation can’t be completed because the item “empty_directory” is in use." I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially in a directory that should've been deleted by a "make clean" command I issued prior to Terminal locking up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key.
    Edit to add: Another important piece of information I left off was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder on the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side of things like I normally do. I'll leave this unanswered for now in case anyone has a good answer for that.
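    A generic first step when Finder claims a folder is "in use" (a suggestion, not something from the question) is to ask which process still has it open:
        # List any processes with open files under the directory (may need sudo)
        sudo lsof +D empty_directory/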

    Read the article

  • mysql command line not working

    - by Sandeepan Nath
    I have MySQL running on my Fedora system. I have XAMPP set up on the system, the PHP projects in the webspace are working fine, phpMyAdmin is working fine, and echoing phpinfo() in a PHP script also shows MySQL enabled. But running the MySQL connect command
        mysql -u[username] -p[password]
    gives this:
        bash: mysql: command not found
    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that MySQL is installed. What exactly do I have to do?
    Additional details: This system was someone else's and he is not available here, so PHP/MySQL may already have been set up on it. I just freshly extracted XAMPP for Linux into /opt/lampp/ and put all the things mentioned above (PHP projects and phpMyAdmin) there. After doing that I had a socket problem (phpMyAdmin was not working and showed "#2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)"). I restarted LAMPP using ./lampp restart but the problem remained. Then, after turning the system on today, I started LAMPP and everything worked just fine. There are no project issues anymore; only the command-line mysql client is not working.
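    A minimal sketch of the usual fix when MySQL comes bundled with XAMPP rather than from a distro package: the client binary lives under /opt/lampp/bin, which is not on the shell's PATH by default (paths follow the XAMPP layout mentioned in the question; adjust if yours differs).
        # Run the bundled client directly
        /opt/lampp/bin/mysql -u root -p

        # Or put the XAMPP binaries on the PATH permanently
        echo 'export PATH="$PATH:/opt/lampp/bin"' >> ~/.bashrc
        source ~/.bashrc
        mysql -u root -p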

    Read the article

  • Expanding to dual video cards

    - by Anthony Greco
    I know a lot of factors can come into play here, so I will list my current hardware and setup:
        MOBO: GIGABYTE GA-890FXA-UD5 [http://www.newegg.com/Product/Product.aspx?Item=N82E16813128441]
        Processor: AMD Phenom II X6 1090T Black Edition Thuban 3.2GHz [http://www.newegg.com/Product/Product.aspx?Item=N82E16819103849]
        RAM: G.SKILL Ripjaws Series 16GB (4 x 4GB) [https://secure.newegg.com/NewMyAccount/OrderHistory.aspx?RandomID=4933910872745320111128011418]
        Current video card: EVGA 01G-P3-1366-TR GeForce GTX 460 SE [http://www.newegg.com/Product/Product.aspx?Item=N82E16814130591]
        OS: Windows 7 Ultimate x64
    Currently I can run 2 monitors just fine with this setup. However, I want to upgrade to 4 monitors. My question is: what is the best way to do this? I remember reading in the past that I need the same type of video card; however, would any GeForce GTX work, or do I need that very specific model (EVGA 01G-P3-1366-TR GeForce GTX 460 SE)? Are there any issues I should be aware of before I order 2 new monitors and a video card? Are there video cards better suited for this? I know NVIDIA offers SLI, but I do not know whether my mobo supports it. My mobo also offers CrossFireX configuration, though from what it says only Radeon cards are compatible.
    Any suggestions or feedback on the best route with my current setup is appreciated. Even if you suggest buying 2 new identical video cards, as long as you can mention which ones and why that is better, I really appreciate it. Note: I really do not do any gaming. I sometimes do some 3D work in Unity and very rarely in Maya. Besides that I do most of my computer work in Visual Studio and Photoshop. However, I need the 2 extra monitors because I sometimes monitor 5 remote desktops at once, and switching between them on only 2 screens is becoming a very big pain. Also, seeing 3 side by side while I work on the 4th will be very helpful. Again, I appreciate any feedback, as I have googled a bunch and just want to make sure what I buy will work.

    Read the article

  • Troubleshooting a Windows 7 PC that wouldn't sleep

    - by NPE
    I have a new Windows 7 PC that wouldn't sleep (not just automatically, but also when specifically told to). The screen goes black momentarily, but within two seconds the machine comes back as if nothing has happened. I tried powercfg energy; this produces some errors quoted at the bottom of this post, plus some warnings about timer resolution. There are no USB devices connected other than a wireless keyboard + mouse (Logitech MK250); I tried unplugging them to no effect. The motherboard is an Asus P7P55D-E. powercfg lastwake says "Wake History Count - 0", which I take to mean that it never actually went to sleep.
    I dual boot into Ubuntu and was having exactly the same problem on the Linux side. That turned out to be down to USB 3.0, which I've now disabled in the BIOS. This has solved the problem on the Ubuntu side of things, but made no difference to Windows 7. Any suggestions?
        USB Suspend:USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
        Device Name                 Generic USB Hub
        Host Controller ID          PCI\VEN_8086&DEV_3B34
        Host Controller Location    PCI bus 0, device 29, function 0
        Device ID                   USB\VID_8087&PID_0020
        Port Path                   1
        USB Suspend:USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
        Device Name                 USB Root Hub
        Host Controller ID          PCI\VEN_8086&DEV_3B34
        Host Controller Location    PCI bus 0, device 29, function 0
        Device ID                   USB\VID_8086&PID_3B34
        Port Path
        USB Suspend:USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
        Device Name                 USB Composite Device
        Host Controller ID          PCI\VEN_8086&DEV_3B34
        Host Controller Location    PCI bus 0, device 29, function 0
        Device ID                   USB\VID_046D&PID_C52E
        Port Path                   1,8

    Read the article

  • computer build for extreme tabbed browsing

    - by David Berger
    I'm interested in building or buying a task-specific computer for my brother. His requirements are ridiculously simple: the machine has to be able to wait in hundreds of web-based virtual waiting rooms at once and not crash. To be competitive, he needs to be able to enter the waiting rooms and auto-refresh them faster. My question is: what priority do I give the different specs? My initial surmise is this:
        Connection speed (nothing to do with my build, but I kind of think this will be more beneficial than anything I build for him)
        Memory size -- I don't usually see Firefox taking up more than a gig, even when heavily tabbed, but I think one gig for the operating system and two gigs for the browser are necessary.
        Processor speed -- I think the processor will affect performance, but even something out of date will do what he needs.
        Memory speed/RAM bus -- I doubt this will matter much, but it seems just on this side of irrelevant.
    Everything else is a non-issue for him. Does this seem to stack up correctly? Also, since he's looking to stay on the cheaper side, and I might end up recommending a refurb to him, is there anything particularly egregious that Vista would do if it came pre-installed? If I build it myself, I'll give him Linux, but if I have it shipped to him, I'm not sure I could walk him through the install process for Linux; I probably could walk him through the process of upgrading to Windows 7, if it were somehow worth it.

    Read the article

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight the question looks similar to this one. I have experienced odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue clearer.
    When trying to copy the compressed file, sized ~2.5 MB (via Explorer or CMD, it doesn't matter), the process stalls after some 20%, producing an error message after a few seconds:
        Cannot copy filename: The specified network name is no longer available.
    If I start the command ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens all network activity is frozen. After a few seconds the network recovers and ping continues to run normally; however, the copy process stands still before it displays the above error message.
    Copying other files (I tried 4-5 files), of which some are larger and some are smaller, succeeds. It seems to me that I can copy all uncompressed files; as soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without using the VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?

    Read the article

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:
        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]
    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script the cvlc execution output says:
        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...
    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
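    For reference, a common explanation for exactly this symptom (a generic sketch, not the poster's script): when the whole command is stored in a plain string, the embedded single quotes are handed to cvlc as literal characters after word splitting, which matches the dst="'#standard..." visible in the error output. Storing the command in a bash array keeps each argument intact:
        SOUT='#standard{access=http,mux=asf,dst=0.0.0.0:58194}'
        URL='mms://example.invalid/stream'        # placeholder for the real, time-locked URL
        CMD=(cvlc --sout "$SOUT" "$URL")          # array: one element per argument, no re-parsing
        printf '%q ' "${CMD[@]}"; echo            # show exactly what will be executed
        exec "${CMD[@]}"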

    Read the article

  • knife on Windows inconsistently reads ~\.chef\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of (open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, however that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results.
    I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get
        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable
    (Note that the path is incorrect.) This is the progression as I walk up the tree towards ~, running knife client list in each directory:
        C:\Users\waldo\Dropbox\_company\projects\ => Above error
        C:\Users\waldo\Dropbox\_company\ => Above error
        C:\Users\waldo\Dropbox\ => It works! (Expected results)
        C:\Users\waldo\ => Expected results
        C:\Users\waldo\Documents\ => Expected results
        C:\Users\waldo\Documents\GitHub => Expected results
        C:\Users\waldo\Documents\GitHub\aProject\ => Expected results
    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. Question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?

    Read the article

  • Are there any open source reseller packages?

    - by Tom Wright
    My department has just been given the right/responsibility to manage our own VPS, the idea being that the bureaucracy will be less for the many small web projects we run. Since each project will be managed by a different team, I was planning on approaching a shared hosting model. Are there any free pieces of software that would help automate the provisioning of resources each time a team requests a new project? Most of the projects have identical requirements - basically LAMP - so it would be these resources that I would want provisioned (and de-provisioned, if that is a word) automatically. Ideally, there would also be a way to hook it into our LDAP authentication backend too, though I could probably make this sort of modification if necessary. Since we won't be charging our "client", however, we won't need the ability to generate invoices, handle payments, etc.
    EDIT: Sample workflow (a rough script sketch follows this list):
        Login authenticated against LDAP
        Username checked against admin group (not on central LDAP)
        Click 'new project' and enter project name
        User created on VPS with project name as username
        Apache virtual host created and subdomain (using project name) allocated
        FTP & MySQL users created
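    A rough sketch of what the per-project provisioning step could look like if scripted by hand (usernames, paths, the domain and the MySQL credentials are illustrative assumptions, not from the question):
        #!/usr/bin/env bash
        # provision-project.sh <projectname> -- illustrative only
        set -euo pipefail
        proj="$1"

        # System user and web root
        useradd --create-home --shell /bin/bash "$proj"
        mkdir -p "/var/www/$proj/public_html"
        chown -R "$proj:$proj" "/var/www/$proj"

        # Apache name-based vhost for the project subdomain (example.com is a placeholder)
        printf '%s\n' \
            "<VirtualHost *:80>" \
            "    ServerName $proj.example.com" \
            "    DocumentRoot /var/www/$proj/public_html" \
            "</VirtualHost>" > "/etc/apache2/sites-available/$proj"
        a2ensite "$proj" && /etc/init.d/apache2 reload

        # MySQL database and user (assumes admin credentials in ~/.my.cnf)
        mysql -e "CREATE DATABASE \`$proj\`; CREATE USER '$proj'@'localhost' IDENTIFIED BY 'changeme'; GRANT ALL ON \`$proj\`.* TO '$proj'@'localhost';"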

    Read the article

  • Splitting a build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines?
    Use case: We are an average software development company. We own around 50 development workstations (Quad Core 2.66GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes.
    The problem: Whenever we build 5 projects in a row, the last project is going to be ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only a part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)!
    Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be canceled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?

    Read the article

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a squid proxy/caching server, and from there into my local wireless router. On the WAN side of the proxy server, I have eth0 with address 208.78.∗∗∗.∗∗∗. On the LAN side of the proxy server, I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the squid server itself is also routed through the proxy/cache, and this is on purpose:
        # iptables forwarding
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE
        # iptables for squid transparent proxy
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried doing:
        iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
        iptables -A INPUT -i eth0 -j REJECT
    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
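    A common pattern for this requirement (a generic sketch, not a tested ruleset for the exact setup above) is to make the INPUT chain stateful: allow loopback, allow replies to connections the box itself initiated, allow new connections only from the LAN interface, and reject everything else arriving on the WAN interface.
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
        iptables -A INPUT -i eth0 -j REJECT
    Note that the rules quoted in the question accept 192.168.2.0/24 arriving on eth0, the WAN interface, where LAN-sourced packets would not normally appear.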

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, however requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtual host directive files; let me know if I need to post more.
    default (note that Apache renames this to 000-default in sites-enabled, so it's not an ordering issue):
        NameVirtualHost *:80
        ServerName emp
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName emp
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
    billmed:
        <VirtualHost *:80>
            ServerName billmed.emp
            ServerRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>
    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom TLD (emp), but progress has been pretty slow.
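    For comparison, a minimal name-based vhost of the kind being attempted normally declares its own DocumentRoot (the quoted billmed file uses ServerRoot, which is a different, server-wide directive); a sketch reusing the paths from the question:
        <VirtualHost *:80>
            ServerName billmed.emp
            DocumentRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>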

    Read the article

  • Cisco Access switch is dropping a large number of end points

    - by user135458
    This afternoon, with no changes to the network, a switch suddenly started dropping a lot of connections. These connections would come back up a few minutes later, then another area connected to the switch would drop off. This is an older 4006 chassis switch, which could in and of itself be a problem, but I'm looking to see what else you all would look for in trying to find a root cause.
    The switch is connected via ports 1/1 and 1/2 in an EtherChannel to a VSS core on 1/1/42 and 2/1/42. Both sides are up and working; however, the CPU on the switch will spike up to 99%, and that's when CRC errors start to hit the VSS core on one of those interfaces and endpoints start dropping off. We tried new transceivers and SFPs on each side of the link, with the same result. When we tried swapping the fiber patch cables on the access switch, the CRC errors did not follow the fiber cables; they stayed with port 1/2 on the access switch. So port 1/2 on the supervisor module looks like the culprit.
    We actually tried to create a new member of the EtherChannel by taking a fiber media converter to Cat5 and making that a member of the port-channel, but when we plugged it in you couldn't even reach the switch. I'm guessing that's unrelated and a problem with the media converter. As of right now we have left it in a state of only one fiber cable running to one side of the VSS core (1/1 access switch -- 2/1/42). I've sent some info in to TAC and they are looking into the situation, but does anyone else have any commands I could run or some troubleshooting I could look into in the meantime?

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs of the authentication connection. It will keep the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin one, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again.
    I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little bit away from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped.
    Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router, with no software on the PC side). I've been running PPPoA the whole time; would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • Apache on OS X not displaying localhost or vhosts correctly

    - by Marcus
    I've encountered a really odd problem in my development environment, and I really can't make any sense of it. It started with a locally developed PHP site refusing to update any content I edited in a file – no text or anything. So if the document was <h2>Hello!</h2> and I edited it to <h2>What's wrong?!</h2>, it still output <h2>Hello!</h2>. I thought it was some kind of caching problem, but neither "hard reloads" in the browser nor sudo apachectl -k restart sorted it out. Only a restart of my Mac finally fixed it.
    Now, a few days later, even stranger issues are appearing. I have a LAMP stack installed via Homebrew; in httpd-vhosts.conf I've set ~/Dev/ as my localhost, and I set up a <VirtualHost *:80> for each project ("ServerName projectname.dev", for example). However, whatever files or folders I put in ~/Dev/ have stopped showing up on localhost, and new VirtualHost directives don't work. There are three projects + "docs" in the folder, but "localhost" only displays the two older projects...?
    So, as I've said, I've tried restarting Apache (without errors), clearing browser caches (tried in three browsers: Chrome, Safari and Firefox) and even rebooting the Mac. Nothing. Any ideas? Running OS X 10.8.5 and Apache 2.2.24.
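    Two quick, generic checks for this sort of vhost confusion (suggestions, not from the question): make sure the configuration Apache is actually loading parses cleanly, and ask Apache which virtual hosts it has picked up and from which file:
        # Syntax-check the configuration that httpd will load
        sudo apachectl configtest
        # Dump the parsed virtual host table (name, port, config file and line)
        sudo apachectl -S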

    Read the article

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit anything useful out. I have the following set:
        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        error_log = /var/log/php/php_error.log
    I have a file showing me phpinfo() which confirms these variables are set correctly for the environment. I have increased memory_limit to 256M (which should be more than enough). Yet the only indication I get is a status 500 code in the Apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug, and the error log only outputs:
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates
        [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico
        [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico
    The PHP error log outputs nothing at all. kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp and checking its log, which just confirms the user is executing correctly:
        [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
        [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000
    This is on Ubuntu 12.04 x86_64 with the following PHP modules:
        ii php5 5.3.10-1ubuntu3.1 server-side, HTML-embedded scripting language (metapackage)
        ii php5-cgi 5.3.10-1ubuntu3.1 server-side, HTML-embedded scripting language (CGI binary)
        ii php5-cli 5.3.10-1ubuntu3.1 command-line interpreter for the php5 scripting language
        ii php5-common 5.3.10-1ubuntu3.1 Common files for packages built from the php5 source
        ii php5-curl 5.3.10-1ubuntu3.1 CURL module for php5
        ii php5-gd 5.3.10-1ubuntu3.1 GD module for php5
        ii php5-mysql 5.3.10-1ubuntu3.1 MySQL module for php5
    So, what am I missing here? Why no error reporting?
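    A couple of generic checks that often surface what a blank 500 response hides (suggestions, not from the question): run the entry point through the CLI interpreter, which prints fatal errors directly, and confirm which php.ini the CGI binary loads, since php5-cgi and php5-cli read different files:
        # Lint the entry point and run it from the command line; fatals go to stdout
        php -l /var/www/index.php
        cd /var/www && php index.php | head -n 40

        # The CGI SAPI reads /etc/php5/cgi/php.ini rather than the CLI one; check its values
        php-cgi -i | grep -E 'Loaded Configuration|display_errors|error_log'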

    Read the article

  • Running phpMyAdmin with XAMPP on Ubuntu 12.10

    - by Luigi Tiburzi
    I know it is a common problem and there are many solutions on the web, but I'm trying everything and nothing is working: I can't get phpMyAdmin running on my machine. I installed XAMPP through:
        sudo tar xvfz ./Downloads/xampp-linux-1.8.1.tar.gz -C /opt
    Then I did the chmod trick that is supposed to put an end to access issues, and I changed the default location of my PHP projects from /var/www to Dropbox/php. Then I started XAMPP in the usual way:
        sudo /opt/lampp/lampp start
    When I try to run one of my PHP projects the output on the web is fine, but if, for example, I type localhost in my browser I get "It works" and not the usual XAMPP interface. Most of all, when I try to access localhost/phpmyadmin I get the login page, insert the username (root) and password, and I get:
        You don't have permission to access /phpmyadmin/index.php on this server.
        Apache/2.2.22 (Ubuntu) Server at localhost Port 80
    I tried the "Require all granted" trick and some others, but nothing is working. I even tried to uninstall phpMyAdmin and reinstall it, but that is not working either. I don't know how to proceed. Thanks for your help.
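    One generic thing worth confirming in a mixed XAMPP/Ubuntu setup (a suggestion, not from the question): the "Apache/2.2.22 (Ubuntu)" footer comes from the distribution's own apache2 package rather than XAMPP's bundled Apache, so it is worth checking which server actually owns port 80 before editing XAMPP's config files:
        # Which process is listening on port 80?
        sudo netstat -tlnp | grep ':80 '
        # If the stock apache2 is running alongside XAMPP, stop it and keep it from starting at boot
        sudo service apache2 stop
        sudo update-rc.d apache2 disable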

    Read the article

  • SVN, Samba and Symbolic Links. How to get them all to play together?

    - by Camsoft
    I've got a website project under version control that relies on files from an unversioned directory on the same server via symbolic links. I'm currently storing the symbolic links in the repository. The idea is that if someone checks out a working copy onto the same server, they can edit and test the working copy of the project before committing it back to the repository. When they check out their working copy it successfully sets up the symlinks so that the entire site works when testing.
    The users that work on the project are Windows users, so I've set up Samba shares on the server and then mapped them to network drives in Windows. People can edit their working copies directly on the server via network shares and then test them in the web browser before committing their changes back to the repository via TortoiseSVN.
    The problem: Samba resolves the symlinks as expected, but when a user tries to commit their changes back to the repository, TortoiseSVN thinks the linked files are part of the project and tries to commit the target files to the repository rather than the symlinks themselves. I tried turning off symlink support in Samba, which means that the linked files cannot be resolved; I don't really want people to have access to the linked files, nor do I want to import the linked files into the repository. The problem with this is that I get:
        Can't stat '\webserver\projects\working\project\symlinked_file.php'. Access is denied
    Apart from the symlink problem everything else works 100% perfectly. Users can either check out website projects to their machine and work on them (but can't test), or check them out to their space on the dev web server and work on them and fully test. So I don't want to change the workflow process; I just need a solution to the symbolic link issue. Many thanks. Originally posted on StackOverflow: http://stackoverflow.com/questions/2400917/svn-samba-and-symbolic-links-how-to-get-them-all-to-play-together

    Read the article

  • Unable to run cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:
        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]
    But the cvlc execution output says:
        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...
    Why is it ignoring my --sout input?

    Read the article

  • Intermittent FTP login issues (Microsoft IIS FTP Service)

    - by JaggenSWE
    I've got a somewhat weird problem which I'm not sure how to troubleshoot. We have an FTP server running on a Windows Server 2003 machine using the IIS FTP Service; this is for our clients and is configured with IP restrictions. However, now ONE of the clients has started complaining that they can't log in to the server from time to time. This is just ONE of 10+ clients, and the only one with this issue, which makes me think it's a problem on their side. Just to be on the safe side I had a peek into the FTP logs and found something strange. Whenever they succeed in logging in, this is what I find in the logs:
        nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191747]USER, userxxx, -,
        nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 230, 0, [191747]PASS, -, -,
    However, if the login fails I see the following events:
        nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191739]USER, userxxx, -,
        nnn.nnn.nnn.70, -, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 530, 1326, [191739]PASS, -, -,
    When you look at the event where the client sends the PASS in the successful login, it seems to know that it is in fact "userxxx" that is coupled to that PASS, but when it fails it seems to be lost, since the user in the PASS event is set to "-". Does anyone have any ideas around this? Any help would be appreciated. :) //JaggenSWE

    Read the article

  • SSL connection error for only one site (of many) on server

    - by Matt Lacey
    I have a server running many websites, each with SSL. One of the sites is now refusing connections over SSL. This was previously working, and I'm looking for assistance in determining what has been changed. Here's the situation:
        http://site1.com/ - works
        https://site1.com/ - works
        http://site2.com/ - works
        https://site2.com/ - doesn't work (but did previously)
    Both sites are on the same server (Win Server 2003 SP2 - IIS6). Both sites use certificates from the same authority and both are valid (according to IIS). As far as I can tell, both sites have certificates configured identically in IIS (checked by a manual/visual comparison of properties, side by side). Through use of OpenSSL I can see that there's an "ssl handshake failure" when trying to connect to site2 using https. What could be the cause of this? How can I investigate further? Without SSL connections being available to this site, users are unable to log in or register. :(
    Disclaimer: I'm not a server admin and not responsible for the box. Yes, there are wider issues here, but I need to get this working again first.
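    For reference, the failing handshake can be inspected in more detail with OpenSSL's test client and compared against the working site (a generic diagnostic; the hostnames stand in for the real sites):
        # Show the handshake and any certificate the server presents
        openssl s_client -connect site2.com:443 -showcerts
        # Compare with the site that works
        openssl s_client -connect site1.com:443 -showcerts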

    Read the article

  • sendmail on Snow Leopard

    - by Jay
    I'm trying to get sendmail working on my MacBook Pro (OS 10.6.4) so that I can send mail with PHP's mail() function. If you know how to do this without sendmail, I'd be interested in that also. The plan is to send mail through smtp.gmail.com using my Gmail account, unless you have a better idea. I did this and that didn't work. In /etc/postfix/smtp_sasl_passwords I tried both:
        smtp.yourisp.com username:password
    and
        smtp.yourisp.com [email protected]:password
    The problem seems to be that Google doesn't like me. I don't think my ISP is blocking it, because Mail.app can send email through smtp.gmail.com just fine. $email is my Gmail address.
        $ printf "Subject: TestMail" | sendmail -f $email $email
        $ tail /var/log/mail.log
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/master[8741]: daemon started -- version 2.5.5, configuration /etc/postfix
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: CAACBFA905: from=<$email>, size=377, nrcpt=1 (queue active)
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/pickup[8742]: C2A68FA93A: uid=501 from=<$email>
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/cleanup[8744]: C2A68FA93A: message-id=<20101021233818.$mydomain>
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/qmgr[8743]: C2A68FA93A: from=<$email>, size=377, nrcpt=1 (queue active)
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8746]: initializing the client-side TLS engine
        Oct 21 19:38:18 Jays-MacBook-Pro postfix/smtp[8748]: initializing the client-side TLS engine
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8746]: CAACBFA905: to=<$email>, relay=none, delay=1334, delays=1304/0.04/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out)
        Oct 21 19:38:49 Jays-MacBook-Pro postfix/smtp[8748]: C2A68FA93A: to=<$email>, relay=none, delay=30, delays=0.08/0.05/30/0, dsn=4.4.1, status=deferred (connect to smtp.gmail.com[74.125.157.109]:25: Operation timed out)
        $
    I also tried setting myhostname, mydomain, and myorigin in /etc/postfix/main.cf to the hostname returned by nslookup for my IP (as displayed by http://www.whatismyip.com/). Still no luck. Any ideas?
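    One detail visible in the log above is that delivery is being attempted on port 25, which many residential ISPs block; a sketch of the usual workaround (generic Postfix commands, not from the question; Gmail expects the submission port with SASL/TLS, and the key in /etc/postfix/smtp_sasl_passwords then has to match the new relayhost value):
        # Is outbound port 25 blocked while the submission port is reachable?
        nc -vz smtp.gmail.com 25
        nc -vz smtp.gmail.com 587

        # If so, point Postfix at the submission port and reload
        sudo postconf -e 'relayhost = [smtp.gmail.com]:587'
        sudo postconf -e 'smtp_sasl_auth_enable = yes' 'smtp_use_tls = yes'
        sudo postfix reload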

    Read the article

  • What should be monitored to troubleshoot file sharing problems?

    - by RyanW
    I'm running into some problems with a file share used by an ASP.NET web application. With this configuration, there are 2 web servers (Win2k8 Web) that connect to a file server (Win2k8 Enterprise), reading and writing files over a file share. Recently, one of the web servers has begun encountering an error when accessing the file share:
        IOException: The specified network name is no longer available.
    There does not appear to be much info on the web explaining what causes this and how best to fix it, so I'm looking at what I can monitor in order to get clues. I'm not sure if it's hardware, just a load issue, file size, frequency, etc. With Windows perfmon, what can I monitor on the file server side? There's the "Files Open" object; any other good ones? What can I monitor on the web server side?
    EDIT: I'll add that the UNC path uses the IP address of the file server, not a name to resolve. Also, the share is a single, flat directory with over 100K files.

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12x2TB HDDs (currently) in a raidz2 (10+2) configuration, so in each computer I have one approx. 20TB volume. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution?
    I'm thinking about FCoE (10 GbE): buying an FCoE (10 GbE) card for each computer, and then what more do I need on the hardware side? (Probably another computer and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD with raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource talking about how to build big volumes from smaller RAID groups (on the software side) is very welcome. So: what OS, what filesystem, what software, etc.?
    In short: I want to get approx. 200TB of storage (in one filesystem) from already existing computers/storage. I don't need fast writes, but I do need good read performance (it will act as a big fileserver), and it should work transparently, so when storing data I don't want to care about which computer the data goes to (i.e. not 10 mountpoints, but one big logical filesystem). Thanks.
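    As one illustration of the software side being asked about (GlusterFS is named here purely as an example of a distributed filesystem that aggregates existing nodes; it is not mentioned in the question and is not the only option), joining per-machine volumes into one mount point looks roughly like this:
        # On node1: add the other storage nodes to the trusted pool (hostnames are placeholders)
        gluster peer probe node2          # repeat for node3 .. node10

        # Create one distributed volume from a "brick" directory on each node's local ~20TB pool
        gluster volume create bigvol transport tcp \
            node1:/tank/brick node2:/tank/brick node3:/tank/brick node4:/tank/brick node5:/tank/brick \
            node6:/tank/brick node7:/tank/brick node8:/tank/brick node9:/tank/brick node10:/tank/brick
        gluster volume start bigvol

        # On any client: mount the aggregate as a single large filesystem
        mount -t glusterfs node1:/bigvol /mnt/bigvol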

    Read the article
