Search Results

Search found 75840 results on 3034 pages for 'file servers'.

Page 556/3034 | < Previous Page | 552 553 554 555 556 557 558 559 560 561 562 563  | Next Page >

  • Unity-webapps: what to do with the downloaded file?

    - by user104293
    I have installed the unity-webapps package in Ubuntu 12.04 (sudo apt-get install unity-webapps). Now I want to see what the fuss is about, so I go to http://bazaar.launchpad.net/~webapps/webapps-applications/trunk/files/head:/src/ and click on one of the options (e.g. Google Calendar). I download one of the files (GoogleCalendar.user.js). What do I do with this file? Are my webapps working? This is not what I was expecting to happen.

    Read the article

  • When should NTPd broadcast/broadcastclient be used instead of client/server or peer modes?

    - by Luke404
    The NTP daemon is often used in its simplest mode, client/server: you specify one or more server directives in your ntp.conf and your clients will use those servers. In addition, when you run your own NTP servers, it is good practice to peer them together, so that if one of them loses connectivity to its upstream servers it will still get time from its peers. But NTPd can also work with broadcast and/or multicast distribution of time data, with the documentation stating: "broadcast and multicast modes are intended for configurations involving one or a few servers and a possibly very large client population". The documentation also says elsewhere: "It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance." I can see one obvious administrative benefit: you don't have to manually specify and update the list of NTP servers in the clients' ntp.conf, so it looks tempting to use broadcast mode even for a small client population (say 5+ clients with 3-4 servers). I expect network traffic to be a little higher with broadcasts than with client/server associations, but on the usual gigabit Ethernet LAN the impact should be negligible unless you have a very large number of hosts in the same broadcast domain. At the end of the day, when should broadcast mode be used or avoided? Are there pros and cons I haven't seen?
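
    A minimal sketch of what the two roles could look like in ntp.conf; the subnet's broadcast address is a placeholder, not taken from the question:

        # on the servers: serve time and announce it on the local broadcast address
        broadcast 192.168.0.255

        # on the clients: listen for those announcements
        broadcastclient

        # a host can carry both lines at once, which is the "both broadcast client
        # and broadcast server" arrangement the documentation describes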

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is generated on the fly and presented to the web app's users. Some PDFs might be on the large side: most will be under 50 MB, but extreme cases could be a few hundred MB. A little waiting for the preview image of a large PDF is acceptable, but no more than a minute, say. Everything is running on one server for now, but soon the app will hit that server's limits on both storage and processing power. My idea to solve the problem: have one or more PDF processing servers as needed, and one or more file storage servers. These two types of servers are mounted via NFS onto the server on which the actual app runs. The app could then use Gearman to delegate PDF processing tasks to the processing servers. A processing server can mount the storage server, read the file stored there, process it and write its output back to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers; is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
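
    For what it's worth, the NFS mount described would typically be a single fstab line on the app and processing instances; the hostname and paths below are invented for illustration only:

        # /etc/fstab on the web and processing instances (names are made up)
        storage1.internal:/export/uploads  /mnt/uploads  nfs  rw,hard,noatime  0  0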

    Read the article

  • arrays in puppet

    - by paweloque
    I'm wondering how to solve the following puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories containing the files:

        dir1/
          fileA
          fileB
        dir2/
          fileA
          fileB
          fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to postfix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1': ensure => directory }
        file { 'dir2': ensure => directory }

        file { $file_names:
          path   => 'dir1',
          ensure => present,
        }
        file { $file_names_2:
          path   => 'dir2',
          ensure => present,
        }

    This won't work because the file resource titles clash. So I need to append e.g. the dir name to the file title; however, that causes the array of files to be concatenated into a single string instead of being treated as multiple files... arghh..

        file { "${file_names}-dir1":
          path   => 'dir1',
          ensure => present,
        }
        file { "${file_names_2}-dir2":
          path   => 'dir2',
          ensure => present,
        }

    How can I solve this without repeating the file resource itself? Thanks
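
    One common workaround, offered only as a sketch (it assumes the puppetlabs-stdlib module for its prefix() function, and real manifests would use absolute paths), is to wrap the file resource in a defined type keyed by directory so that every title stays unique without repeating the resource:

        # sketch: the define name is illustrative; requires puppetlabs-stdlib for prefix()
        define deploy_dir ($files) {
          file { $name:                        # $name is the directory, e.g. 'dir1'
            ensure => directory,
          }
          file { prefix($files, "${name}/"):   # titles become 'dir1/fileA', 'dir1/fileB', ...
            ensure => present,
          }
        }

        deploy_dir { 'dir1': files => ['fileA', 'fileB'] }
        deploy_dir { 'dir2': files => ['fileA', 'fileB', 'fileC'] }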

    Read the article

  • Oh that XML - did you ever try to read a raw file?

    - by GGBlogger
    If you've ever looked at a raw XML file - even a very simple one - you'll understand. XML files are nearly impossible to read in raw format. That's where various tools come in, and there are a bunch of them, including some very simple tools. If, however, you need some horsepower, one of the best tools on the planet is LiquidXML! LiquidXML is a developer's tool. It's also an analyst's tool, a tester's tool and a designer's tool. Did I mention that it is compatible with Visual Studio? Once again I will be following up on this as time permits. But if this sounds like something you can use, just visit http://www.liquid-technologies.com/. You will find a very complete description plus high quality training videos that will help you decide if this is a tool you can use.

    Read the article

  • How can I initialize file names in the folders/ subfolders?

    - by user37991
    Sometimes Thunar allows 'camel-casing' of file names, but only in the immediate folder, not in sub-folders. A "Perl module to convert strings to and from CamelCase" exists in Synaptic, but it is not powerful enough, or am I ignorant? The only way I can reliably do this is with Windows shareware (paid $$: Servant Salamander). That's why all my data & archives are on NTFS partitions. BTW: Synaptic has different entries for 'initialize' and 'initialise'. But I'll mention this in another question.

    Read the article

  • How do I handle "Firmware file X not found" stopping bootup?

    - by John Baber
    I recently upgraded my desktop mac to Precise and now can't get past the Ubuntu 12.04 splash screen. The splash freezes with the following written on it:

        [ 19.931097] b32-phy0 ERROR: Firmware file "b43/ucode16_mimo.fw" not found
        [ 19.9311126] b32-phy0 ERROR: You must go to http://wireless.kernel.org/en/users/Drivers/b43#devicefirmware and download the correct firmware for this driver version. Please carefully read all instructions on this website.

    This is when I choose 3.2.0-generic from the grub menu. If I choose to load up older kernels, I still get the same thing. How can I make my computer finish booting again? I'm able to ssh into the machine and poke around, but the actual screen freezes at the splash.
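
    The firmware warning itself is often harmless and may not be what actually stops the boot, but since the machine is reachable over ssh, silencing it is cheap. The package and module names below are the usual Ubuntu ones; worth confirming with apt-cache search b43 first:

        # install the b43 firmware the driver is complaining about
        sudo apt-get install firmware-b43-installer
        # or, if the wireless card isn't needed, stop the module from loading at all
        echo "blacklist b43" | sudo tee /etc/modprobe.d/blacklist-b43.conf
        sudo update-initramfs -u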

    Read the article

  • How to change mimetype image for PSD file (Photoshop)?

    - by ubuntico
    Following https://help.ubuntu.com/community/AddingMimeTypes, I first ran a command to discover the mimetype for the psd extension:

        $ grep 'psd' /etc/mime.types
        image/x-photoshop                    psd

    Then I took the Photoshop CS5 SVG image from Wikipedia, renamed it to image-x-photoshop.svg, and copied the file to the folder /usr/share/icons/gnome/scalable via

        sudo cp image-x-photoshop.svg /usr/share/icons/gnome/scalable/image-x-photoshop.svg

    Logged out, logged in, but the icon for .psd files is still unchanged. Does anyone know what I am doing wrong? I use Ubuntu 10.10. Thanks in advance
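
    A guess at what might be missing (not verified on 10.10): GNOME looks for mimetype icons in the theme's mimetypes/ subdirectory, and only rereads them after the icon cache is refreshed:

        sudo cp image-x-photoshop.svg /usr/share/icons/gnome/scalable/mimetypes/
        sudo gtk-update-icon-cache -f /usr/share/icons/gnome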

    Read the article

  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software. This content is downloaded by users over HTTP. Now free space is coming to an end and we are exploring different ways of extending our storage capacity. We want to do it cheaply, simply and reliably (protected from disk/server faults). Currently, we see two ways:

    1. Add a couple of cheap servers with 4 disks each (RAID1?) and run some distributed file system on top, like GlusterFS. Pros: hopefully we will see all our disks as a single flat file system, just dump content into it and be done. Cons: could be tricky to configure and to handle faults.

    2. Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or a video) is placed on two randomly selected servers. Pros: no need to deal with RAID, as content is duplicated; a single server failure does not take down any part of the content; doubled distribution capacity (as any single file can be downloaded from either of the two servers hosting it). Cons: requires some scripting for distributing content and for adding/removing servers.

    Do we miss any other ways? Which of the aforementioned options seems to be the best?
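
    For the second option, the "some scripting" could be as small as a deterministic pick of two hosts per file (deterministic rather than random, so lookups need no database); a rough Python sketch with invented hostnames:

        import hashlib

        SERVERS = ["store1.example", "store2.example", "store3.example"]  # hypothetical hosts

        def pick_two(filename):
            """Map a file name to the same two distinct servers every time."""
            h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
            first = h % len(SERVERS)
            offset = 1 + (h // len(SERVERS)) % (len(SERVERS) - 1)   # never zero
            second = (first + offset) % len(SERVERS)
            return SERVERS[first], SERVERS[second]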

    Read the article

  • Why do I get "No root file system is defined" when I try install in one partition?

    - by Emilio
    The thing is that I have 3 partitions on my computer:

        /dev/sda1  Type: ntfs  (size 104 MB; 35 MB in use)        [This is the Windows loader]
        /dev/sda2  Type: ntfs  (size 144598 MB; 64536 MB in use)  [Here I want to install UBUNTU]
        /dev/sda3  Type: ntfs  (size 105353 MB; 20227 MB in use)  [This is my backup partition; I don't want to delete anything from here, I have all my necessary information]

    So the problem is that when I select "Device for boot loader installation" -> "/dev/sda2", this pops up: "No root file system is defined. Please correct this from the partitioning menu." How can I resolve this? :)

    Read the article

  • Mount of File System Failed. -- After upgrading from 9.04 to 9.10

    - by javanoob
    After upgrading from Ubuntu 9.04 to 9.10 I am getting this message on boot up:

        Mount of File System Failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and retry.
        myusername@root:~$

    After searching Google, I found out that I have to run fsck on the OS partition from an Ubuntu live CD. Here are my questions: 1) I don't know in which partition my Ubuntu OS is installed (those are not user-friendly drive names to remember, right? :)) -- is there any command to find out which partition my Ubuntu OS is installed on? 2) Can I do this from an Ubuntu 10.04 live CD? Thanks in advance
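
    From a live CD, the partition can usually be identified and checked along these lines; the device name below is only an example, adjust it to whatever the listing shows:

        sudo fdisk -l            # list partitions; the Ubuntu one shows as type "Linux"
        sudo blkid               # shows filesystem types and labels, easier to read
        sudo fsck -f /dev/sda1   # run on the unmounted root partition (pick the right device!)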

    Read the article

  • How do I combine all lines in a text file into a single line?

    - by John
    I want to get all lines in a text file into one line. I'm a beginner at coding trying to learn by doing. I've spent four hours trying to solve this problem. I know there's a simple solution to this problem. Here's what I've been trying:

        sed -e 'N;s/\n//' myfile.txt                   # Does nothing
        sed -e :a -e N -e 's/\n/ /' -e ta myfile.txt   # output all messed up and I can't make head nor tail of the syntax
        cat myfile.txt | tr -d '\n' myfile.txt         # Deletes all lines

    Here's the text file:

        500212 262578-4-4 23200 GRIFFITH LABORATORIES LTD GRIFFITH LABORATORIES SOUTH DUBLIN COUNTY COUNCIL OFFICE OFFICE (INDUSTRIAL) List Rateable 2 Pineview Industrial Estate Firhouse Road Knocklyon 31 Dec 2007 01 Jan 2008"

    I can't figure out where I've gone wrong....
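
    For reference, a couple of one-liners that do join lines (the tr attempt above fails because tr reads standard input only, so the trailing file name is an extra operand):

        tr '\n' ' ' < myfile.txt     # replace each newline with a space
        tr -d '\n' < myfile.txt      # delete newlines outright (words run together)
        paste -sd' ' myfile.txt      # same effect using paste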

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers: one runs MySQL and the other is the public-facing web server, which also runs Redis and ElasticSearch. Both of these servers are virtualised, but they're static in the sense that they can't be replicated at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server any time soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point; should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here: how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
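
    On the last point, one hedged sketch: a Capistrano-style role list can be built at deploy time instead of hard-coded, for example by asking the cloud for the currently running web instances. The instance-listing command below is a placeholder, not a real CLI:

        # deploy.rb sketch -- "list-web-instances" stands in for whatever returns current IPs
        web_hosts = `list-web-instances --format ip`.split
        role :web, *web_hosts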

    Read the article

  • Why does nothing work after I randomly changed some file permissions?

    - by Josh B
    OK, so last night I was trying to set permissions on some folders in my file system, since apparently I'm not an admin on my own computer. Now everything is messed up today and I don't know what to do:

    - I lost my internet; the icon is not showing in the taskbar anymore.
    - I lost my sound; no sound devices are listed when I go into the sound menu.
    - I cannot log in as root anymore; it gives me "sudo: must be setuid root".
    - I cannot plug anything in anymore; it will not recognize flash drives or external hard drives.
    - It gives me an Internal Error message every time I log in.
    - It doesn't let me into the GRUB screen anymore on boot up.

    What did I do? I have a lot of files on here I wish to put on a flash drive, but it won't recognize it.
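
    If it helps, the specific "sudo: must be setuid root" part is usually repaired from a recovery-mode root shell like this; the rest of the damage almost certainly needs more than these two lines:

        # from a root shell (recovery mode), restore sudo's ownership and setuid bit
        chown root:root /usr/bin/sudo
        chmod 4755 /usr/bin/sudo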

    Read the article

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have 2 web servers load balanced and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State Service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions, but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers. I'm not receiving any errors. I have the same machine key for both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue?

    Update: I turned on the session state service on my local machine and pointed both servers to the IP address of my local machine and it worked as expected. The session was shared between both servers. This leads me to believe that the problem might be that I'm not using a standalone server as my state service. Perhaps the problem is that I am using the IP address 127.0.0.1 on one server and a different IP address on the other server. Unfortunately, when I try to use the network IP address as opposed to localhost, the connection doesn't seem to work from the host server. Any insight on whether my suspicions are correct would be appreciated.
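
    Two things worth checking, offered as hedged guesses rather than a confirmed fix: the state service only accepts remote connections after a registry change, and both servers' web.config must point at the same StateServer endpoint (the IP below is an example):

        <!-- web.config on both servers; same machineKey, same state server -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=10.0.0.5:42424"
                      timeout="20" />

    and on the box running the ASP.NET State Service, from an elevated prompt:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters" /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state
        net start aspnet_state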

    Read the article

  • How to set PcManFm as the default file manager?

    - by JarekJ83
    I think Nautilus is so slow, and I'd like to move to PCManFM, but I didn't find any good tips on how to do this in Ubuntu 12.10. I have PCManFM installed already, and I even changed /usr/share/applications/nautilus-folder-handler.desktop (via sudo gedit) to:

        [Desktop Entry]
        Name=Files
        Comment=Access and organize files
        Exec=pcmanfm %U
        Icon=system-file-manager
        Terminal=false
        NoDisplay=true
        Type=Application
        StartupNotify=true
        OnlyShowIn=GNOME;Unity;
        Categories=GNOME;GTK;Utility;Core;
        MimeType=inode/directory;application/x-gnome-saved-search;
        X-GNOME-Bugzilla-Bugzilla=GNOME
        X-GNOME-Bugzilla-Product=nautilus
        X-GNOME-Bugzilla-Component=general
        X-GNOME-Bugzilla-Version=3.2.1
        X-Ubuntu-Gettext-Domain=nautilus

    Still, the slow Nautilus is the default one.
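
    Rather than editing Nautilus's .desktop file, the per-user default handler for folders can usually be set with xdg-mime; the .desktop name below is what the pcmanfm package normally installs, so check /usr/share/applications to confirm it:

        xdg-mime default pcmanfm.desktop inode/directory
        xdg-mime query default inode/directory    # verify the change took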

    Read the article

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools like RightScale, Scalr, and custom scripts for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my web servers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers rebuild at a time, while the other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost effective and most likely requires more time for the build to complete (it's faster to rebuild an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of this. I'm not talking about a simple ssh loop, clusterssh, or even mcollective; I'd prefer something with a nice, simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and that builds them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open source project?

    Read the article

  • New file in ubuntu kernel ppa repository to install?

    - by Nikki Kononov
    Googling this question hasn't brought me to anything, so I decided to ask here. Recently I noticed on kernel.org that kernel 3.7 is finally considered stable, so I decided to upgrade, since I am on 3.7 RC7. But when I opened http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.7.1-raring/ I noticed a new file (actually 2, which seem to be identical): linux-headers-3.7.1-030701-omap_3.7.1-030701.201212171620_armhf.deb and linux-image-3.7.1-030701-omap_3.7.1-030701.201212171620_armhf.deb. So my question is: should I install those as well, or continue installing only 4 files (i.e. the 3 AMD64 image and header packages plus the "all" headers)? Thank you.

    Read the article

  • Did my registrar screw up or is this how name server propagation works?

    - by Brad
    So my company has a number of domains with a large registrar that shall go unnamed. We are making some changes to our DNS infrastructure, and the first of those is moving our secondary DNS from one server on site to four servers offsite. So we updated the name servers for each domain at the registrar by removing the entry for the old secondary name server and adding the four new ones. I monitored the old secondary server for requests, and when I saw no new requests had been made for 24 hours I shut it down. That was this morning. I assumed at this point everything was good. Unfortunately this was my mistake; I should have gone and made sure name servers at large were returning the correct NS records. So this afternoon we were performing maintenance on our primary DNS server and we shut it down. This is when I started getting alerts from our external monitoring. I checked, and sure enough, the DNS server used there reported that the only NS record for our primary domain was the primary name server. The new secondary servers were not listed, and neither was the old secondary. Is it unreasonable of me to have assumed that, because the update was made in one step at the registrar, from {ns1.mydomain.com, ns2.mydomain.com} to {ns1.mydomain.com, ns1.backupdns.com, ns2.backupdns.com, ns3.backupdns.com, ns4.backupdns.com}, there should be no intermediate state where the only NS record was for ns1.mydomain.com? Going forward, to be safe, obviously I will always leave the old name servers alone until after I'm 100% sure the new ones have propagated, and only then remove the old name servers at the registrar. However, I'd still like to know if my registrar screwed up or if my expectation was unreasonable.
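
    For future checks, the delegation the registrar actually published can be queried directly rather than assumed, for example:

        dig NS mydomain.com +trace            # follows the delegation from the root servers down
        dig NS mydomain.com @8.8.8.8 +short   # what a public resolver currently returns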

    Read the article

  • Is acoustic fingerprinting too broad for one audio file only?

    - by IBG
    We were looking for some topics related to audio analysis and found acoustic fingerprinting. As it is, its most famous application seems to be identification of music. Enter our manager, who asked us to research and possibly find an algorithm or existing code that we can use for this very simple approach (like it's easy; source code doesn't show up like mushrooms):

    - An always-on app listening for audio
    - Compare the audio patterns to a single audio file (assume the sound is a simple beep)
    - If the beep is detected, send a notification to a server

    With a flow this simple, do you think acoustic fingerprinting is too broad an approach to use? Should we stop and take another approach? Where would we best start? We haven't started anything yet (on the development side), so I want to get other opinions on whether this pursuit is worth it or moot.
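
    Given that the target sound is a single known beep, one much narrower alternative to fingerprinting is simply watching the energy around the beep's frequency. A rough Python/NumPy sketch; the sample rate, beep frequency and threshold are made-up parameters to be tuned against real recordings:

        import numpy as np

        RATE = 44100        # samples per second (assumed)
        BEEP_HZ = 1000.0    # the beep's known frequency (assumed)
        THRESHOLD = 50.0    # tune against recordings of real background noise

        def beep_detected(frame):
            """frame: 1-D numpy array holding roughly 0.1 s of mono audio samples."""
            spectrum = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / RATE)
            band = spectrum[(freqs > BEEP_HZ - 50) & (freqs < BEEP_HZ + 50)]
            return band.mean() > THRESHOLD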

    Read the article

  • Slow RDP after server joins domain

    - by Chris Grove
    We're having RDP issues with Amazon cloud servers that we recently joined to an Active Directory domain. The setup is:

    - A local office network
    - A virtual private cloud in Amazon
    - An IPSec tunnel between the two networks
    - A number of Windows 2008 R2 servers on both networks
    - An AD domain (call it abc.net), with one domain controller in each network

    The domain controllers are both new, fresh installs. Before we had the domain set up, we had local accounts on the cloud computers which were used for RDP access. Our idea was to get all of the servers onto the domain so we could use domain logins instead of per-server local logins. Before the cloud servers were in the domain, RDP (from the office network or through a VPN to the cloud) worked great. After we joined the cloud servers to the domain, RDP from the office became very slow: a few minutes to log in, long frequent pauses when the interface is unresponsive, generally just a slow and frustrating experience. This is a problem regardless of whether a domain or local login is used for RDP. Oddly, when outside of the office network and connecting to the cloud directly over the VPN, RDP is still very responsive. Any idea why RDP from office to cloud is suddenly very slow after the cloud servers joined the domain? What can I look at in our configuration to address this? Any help is greatly appreciated.
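
    One thing worth ruling out first (only a guess): after the domain join, the cloud servers may be resolving names or authenticating against the domain controller on the far side of the tunnel. From one of the cloud servers, which DC and DNS are being used can be checked with:

        rem run on one of the cloud servers
        nslookup abc.net
        nltest /dsgetdc:abc.net
        echo %LOGONSERVER%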

    Read the article

  • Remote access to phpmyadmin from computer belongs to same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf to make Apache listen on ports 80 and 8080:

        Listen 80
        Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 machine recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help please?

        config.inc.php:

        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'http';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = false;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

        phpmyadmin.conf:

        <Directory /var/www/html/phpmyadmin/>
            order allow,deny
            allow from all
        </Directory>

    Furthermore, I can access the web pages stored on the CentOS box from my other computer without problems. After using wireshark and tcpdump, I found that the server keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):

        23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
        23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have disabled the iptables service on the CentOS box already.

    Read the article

  • Synchronize Dreamweaver over an SSH tunnel using an SFTP connection

    - by Aeo
    Maybe... Just maybe... I'm asking too much here. Maybe I'm even barking up the wrong tree. I'm looking to essentially have Dreamweaver establish an SSH tunnel to one machine, and then use that connection to synchronize a site that is on another machine entirely. Now for some details: We've got two connections here at work. We've got our office connection for day-to-day business, and then we've got a fancy connection hosting our web servers upstairs. For the most part they've been mutually exclusive until recently. We had been establishing an SFTP connection to synchronize our websites by going out over the office connection to the web and coming back in over the fancy connection to our servers upstairs. Recently-ish, we established a LAN connection to one of our servers that makes a pleasant change in VNC connection quality. Thanks to Vinagre, this makes it really easy to connect to any of our servers over this LAN connection via an SSH tunnel for VNC. However, in spite of that new LAN connection, we still synchronize over the net: out the office connection and in on the fancy one upstairs. I'm looking to change this. I'd like to get Dreamweaver to first tunnel over our LAN connection to the servers, and then go from there to whatever connection it needs to. Am I asking too much? The current setup: Dreamweaver is installed on Windows XP, which is running within VirtualBox on top of Ubuntu 10.10. The network connection for VirtualBox is currently made in NAT mode, but could easily be switched to a bridged connection if need be. The LAN connection is to 1 of 5 servers running CentOS 5.
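
    The usual trick here (a sketch; the hostnames are placeholders) is a local port forward through the LAN-reachable server, with Dreamweaver's SFTP site definition then pointed at localhost on the forwarded port:

        # forward local port 2222 to the web server's SSH port, via the LAN-reachable box
        ssh -N -L 2222:webserver.internal:22 youruser@lan-server.internal
        # Dreamweaver's SFTP settings would then use host "localhost" and port 2222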

    Read the article

  • How do you load in a .bmp file to use it as a height map in C++?

    - by user1324894
    I need to create terrain that can be used in C++ and OpenGL, and I am told that using a height map is the way to go. I have a .bmp file that is a greyscale image that I would like to use. However, I do not know how to load that greyscale image into the project so that it can be used as a height map, how to apply a texture to it to create the terrain, or how to compute the normals that I'm told will be required. How would I go about achieving this?
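
    A minimal sketch of just the loading step, assuming an uncompressed, bottom-up, 24-bit BMP (header offsets follow the standard BMP layout); building the mesh, texturing and normal generation would come after this:

        #include <cstdint>
        #include <cstring>
        #include <fstream>
        #include <stdexcept>
        #include <vector>

        struct HeightMap {
            int width = 0, height = 0;
            std::vector<float> h;   // row-major heights in [0, 1]
        };

        // Load an uncompressed 24-bit BMP and use one colour channel as the height.
        HeightMap loadBmpHeightMap(const char* path) {
            std::ifstream in(path, std::ios::binary);
            if (!in) throw std::runtime_error("cannot open file");

            unsigned char hdr[54];
            in.read(reinterpret_cast<char*>(hdr), 54);
            if (hdr[0] != 'B' || hdr[1] != 'M') throw std::runtime_error("not a BMP");

            uint32_t dataOffset; std::memcpy(&dataOffset, hdr + 10, 4);
            int32_t  w;          std::memcpy(&w,          hdr + 18, 4);
            int32_t  hgt;        std::memcpy(&hgt,        hdr + 22, 4);
            uint16_t bpp;        std::memcpy(&bpp,        hdr + 28, 2);
            if (bpp != 24) throw std::runtime_error("expected a 24-bit BMP");

            HeightMap map;
            map.width = w; map.height = hgt;
            map.h.resize(static_cast<size_t>(w) * hgt);

            int rowSize = (w * 3 + 3) & ~3;              // each pixel row is padded to 4 bytes
            std::vector<unsigned char> row(rowSize);
            in.seekg(dataOffset, std::ios::beg);
            for (int y = 0; y < hgt; ++y) {              // BMP rows are stored bottom-up
                in.read(reinterpret_cast<char*>(row.data()), rowSize);
                for (int x = 0; x < w; ++x) {
                    unsigned char grey = row[x * 3];     // greyscale image, so B == G == R
                    map.h[static_cast<size_t>(hgt - 1 - y) * w + x] = grey / 255.0f;
                }
            }
            return map;
        }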

    Read the article

< Previous Page | 552 553 554 555 556 557 558 559 560 561 562 563  | Next Page >