Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.


  • E-commerce for custom orders/customer image upload

    - by ansarob
    We have a client that needs an e-commerce site set up pretty quickly. As I have no experience with e-commerce, I am looking for some guidance. Basically, the two big features we need are: Ability for customer to add info about order (example: the name the customer wants to be put on the customizable product they ordered) Ability for customer to upload photo of product to be customized I hope this makes sense. Right now I am really looking into Shopify, but I can't tell if it does everything we need. I know you can add order notes when checking out, but not sure about image upload (maybe it can be added as an app through the API?).

    Read the article

  • Fastest way to run a JSON server on my local machine

    - by Mohsen
    I am a front-end developer. For many of the experiments I do, I need a server that talks JSON with my client-side app. Normally that server is a simple one that responds to my POSTs and GETs. For example, I need to set up a server that saves, modifies and reads data from a "library" database like this: POST /books creates a book, GET /book/:id gets a book, and so on. What is the fastest and easiest technology stack for the database and server in this case? I am open to using Ruby, Node.js or anything that does the job fast and easily. Is there any framework (in any language) that does this kind of thing for me?
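
    One possible answer among many, offered here only as a hedged sketch: PHP's built-in development server plus SQLite needs no extra infrastructure. The routes follow the question; the books schema and file names are assumptions made for illustration, and a reasonably recent PHP is assumed.

      <?php
      // index.php - run with:  php -S localhost:8000 index.php
      // Minimal JSON "library" API sketch; schema and routes are illustrative.
      $db = new PDO('sqlite:' . __DIR__ . '/library.db');
      $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
      $db->exec('CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)');

      header('Content-Type: application/json');
      $method = $_SERVER['REQUEST_METHOD'];
      $path   = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

      if ($method === 'POST' && $path === '/books') {
          // Create a book from the JSON request body.
          $in = json_decode(file_get_contents('php://input'), true) ?: [];
          $stmt = $db->prepare('INSERT INTO books (title, author) VALUES (?, ?)');
          $stmt->execute([$in['title'] ?? '', $in['author'] ?? '']);
          echo json_encode(['id' => (int) $db->lastInsertId()]);
      } elseif ($method === 'GET' && preg_match('#^/book/(\d+)$#', $path, $m)) {
          // Read a single book by id.
          $stmt = $db->prepare('SELECT id, title, author FROM books WHERE id = ?');
          $stmt->execute([$m[1]]);
          $book = $stmt->fetch(PDO::FETCH_ASSOC);
          if ($book) {
              echo json_encode($book);
          } else {
              http_response_code(404);
              echo json_encode(['error' => 'not found']);
          }
      } else {
          http_response_code(404);
          echo json_encode(['error' => 'unknown route']);
      }

    With this running, a POST to /books with a body like {"title": "Dune", "author": "Herbert"} answers with the new id, and GET /book/1 returns the stored row.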

    Read the article

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I thought that the "password" auth method is what I want. Many people have access to the server PostgreSQL is running on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is "Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35". It is kind of strange because I can connect via phpPgAdmin without any problems, and I can also connect from my home computer with psql - again without any problems. This is my pg_hba.conf:

      # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
      # "local" is for Unix domain socket connections only
      local   all       all                 password
      # IPv4 local connections:
      host    all       all   127.0.0.1/32  password
      # IPv6 local connections:
      host    all       all   ::1/128       password

    The connection string I'm using with pg_connect is:

      $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
      $dbConnection = pg_connect($connection_string);

    Does anybody know why this is happening? Did I misconfigure something?

    Read the article

  • What's up with all the updates? [closed]

    - by Bob Babb
    I use Ubuntu exclusively for my job, especially for the fact that everything works and I get the most out of my processor and memory, but you are killing me with updates! I just lost a very good opportunity from a client that installed Ubuntu but got tired of all the updates. I really can't argue the fact. In a matter of a day I had two software updates. Quote from customer: "It's sad that I come in at 6:00 in the morning to install updates from a LTS version, and then before I leave at the end of the day I have 19 new updates to install. At least Microsoft bundles them in controllable groups." Sadly I have to agree, guys you have to do something about this. Please!

    Read the article

  • how does server communication work in a flash game with a php backend

    - by Tim Rogers
    I am trying to create a browser game using ActionScript/Flash. Currently, I'm trying to understand how I would go about creating a back-end which interfaces with my MySQL database. As far as I understand, if I create a php file on a webserver called test.php and then navigate to a webpage hosted on the server, e.g. www.example.com/test.php, the php script will run and display the result in my browser. This would use HTTP. Is this how communication between client and server usually works in a flash game? For example, if the game needed to query the db, would ActionScript have to essentially invoke the url of the php script that would execute the query? It could then parse the data and use it. If this is the case, then is JSON considered a good way to transfer data over HTTP?
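
    The pattern described - the SWF issues an HTTP request (for example with URLLoader) to a PHP URL, the script queries MySQL and echoes JSON, and the client parses the response - is indeed the common approach. A hedged sketch of such an endpoint follows; the database credentials and the highscores table are invented for illustration.

      <?php
      // get_scores.php - hypothetical endpoint the Flash client would call over HTTP.
      // Connection details and table/column names are placeholders.
      $db = new PDO('mysql:host=localhost;dbname=game', 'gameuser', 'secret', array(
          PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
      ));

      // Query the data the client asked for and hand it back as JSON.
      $stmt = $db->query('SELECT name, score FROM highscores ORDER BY score DESC LIMIT 10');
      header('Content-Type: application/json');
      echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

    On the ActionScript side, the same URL is simply loaded and the returned text decoded as JSON.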

    Read the article

  • FreeBSD rc.d script doesn't work when starting up

    - by kastermester
    I am trying to write an rc.d script to start fastcgi-mono-server4 on FreeBSD when the computer starts up, in order to run it with nginx. The script works when I execute it while logged in on the server, but when booting I get the following message:

      eval: -applications=192.168.50.133:/:/usr/local/www/nginx: not found

    The script looks as follows:

      #!/bin/sh
      # PROVIDE: monofcgid
      # REQUIRE: LOGIN nginx
      # KEYWORD: shutdown

      . /etc/rc.subr

      name="monofcgid"
      rcvar="monofcgid_enable"
      stop_cmd="${name}_stop"
      start_cmd="${name}_start"
      start_precmd="${name}_prestart"
      start_postcmd="${name}_poststart"
      stop_postcmd="${name}_poststop"
      command=$(which fastcgi-mono-server4)
      apps="192.168.50.133:/:/usr/local/www/nginx"
      pidfile="/var/run/${name}.pid"

      monofcgid_prestart()
      {
          if [ -f $pidfile ]; then
              echo "monofcgid is already running."
              exit 0
          fi
      }

      monofcgid_start()
      {
          echo "Starting monofcgid."
          ${command} -applications=${apps} -socket=tcp:127.0.0.1:9000 &
      }

      monofcgid_poststart()
      {
          MONOSERVER_PID=$(ps ax | grep mono/4.0/fastcgi-m | grep -v grep | awk '{print $1}')
          if [ -f $pidfile ]; then
              rm $pidfile
          fi
          if [ -n $MONOSERVER_PID ]; then
              echo $MONOSERVER_PID > $pidfile
          fi
      }

      monofcgid_stop()
      {
          if [ -f $pidfile ]; then
              echo "Stopping monofcgid."
              kill $(cat $pidfile)
              echo "Stopped monofcgid."
          else
              echo "monofcgid is not running."
              exit 0
          fi
      }

      monofcgid_poststop()
      {
          rm $pidfile
      }

      load_rc_config $name
      run_rc_command "$1"

    In case it is not already super clear, I am fairly new to both FreeBSD and sh scripts, so I'm kind of prepared for some obvious little detail I overlooked. I would very much like to know exactly why this is failing and how to solve it, but I am also open to ideas if anyone has a better way of accomplishing this.

    Read the article

  • Repacked proprietary software keeps updating the same deb

    - by Johannes
    I repacked a proprietary program delivered as a tar file into a deb file in order to have a company-wide repository. I used reprepro to set up the repository and signed it. A Unix timestamp fakes the version numbering, so I can have different (real) versions installed at the same time. Almost everything works as expected. The deb file looks like this: mysoft8.0v6_1366455181_amd64.deb. The only problem is that a client machine tries to install the same deb file over and over again because it thinks it's an update. What am I missing? The control file in the deb package looks like this:

      Package: mysoft8.0v6
      Version: 1366455181
      Section: base
      Priority: optional
      Architecture: amd64
      Installed-Size: 1272572
      Depends:
      Maintainer: me
      Description: mysoft 8.0v6 dpkg repackaging

    and the config in the repository, /mirror/mycompany.inc/conf/distributions:

      Origin: apt.mycompany.inc
      Label: apt repository
      Codename: precise
      Architectures: amd64 i386
      Components: main
      Description: Mycompany debian/ubuntu package repo
      SignWith: yes
      Pull: precise

    Help much appreciated. Added: this is the guide I used to create the repository.

    Read the article

  • Forward .html/.htm to .php with .config

    - by PhilipK
    I'm moving a site from my Linux-hosted server to a client's Windows-hosted server. The .htaccess file no longer works and I'm told that Windows servers use web.config. How can I forward all users accessing .html & .htm files to the equivalent .php file? Server info:

      OS/Hosting Type: Windows / Shared Hosting
      .Net Runtime Version: ASP.Net 2.0/3.0/3.5
      PHP Version: PHP 5.2
      IIS Version: IIS 7.0
      Data Center: US Regional

    EDIT: Hosting provided by GoDaddy. I was told by a friend that the following should work, but it has no effect on the site:

      <configuration>
        <system.webServer>
          <handlers>
            <add name="PHP-FastCGI" verb="*" path="*.html" modules="FastCgiModule"
                 scriptProcessor="c:\php\php-cgi.exe" resourceType="Either" />
          </handlers>
        </system.webServer>
      </configuration>

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

      root /var/www;
      include php-fpm.conf;

      location ~ /\. {
          access_log off;
          log_not_found off;
          deny all;
      }

    php-fpm.conf:

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_pass unix:/var/run/php5-fpm.socket;
          include fastcgi_params;
      }

    fastcgi_params:

      fastcgi_param QUERY_STRING $query_string;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param CONTENT_TYPE $content_type;
      fastcgi_param CONTENT_LENGTH $content_length;

      fastcgi_index index.php;
      fastcgi_param HTTPS on;

      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      #fastcgi_param SCRIPT_FILENAME $request_filename;
      fastcgi_param SCRIPT_NAME $fastcgi_script_name;
      fastcgi_param REQUEST_URI $request_uri;
      fastcgi_param DOCUMENT_URI $document_uri;
      fastcgi_param DOCUMENT_ROOT $document_root;
      fastcgi_param SERVER_PROTOCOL $server_protocol;

      fastcgi_param GATEWAY_INTERFACE CGI/1.1;
      fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

      fastcgi_param REMOTE_ADDR $remote_addr;
      fastcgi_param REMOTE_PORT $remote_port;
      fastcgi_param SERVER_ADDR $server_addr;
      fastcgi_param SERVER_PORT $server_port;
      fastcgi_param SERVER_NAME $server_name;

      # PHP only, required if PHP was built with --enable-force-cgi-redirect
      fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • Dovecot authentication not working

    - by user1488723
    I run an Ubuntu 10.04 VPS with Postfix and Dovecot installed. For a while I had problems with the mail server itself (Postfix) but now it runs OK. I can telnet into it from localhost (telnet localhost 25 while logged in) and I'm blocked if I try to do it from the outside (telnet mail.example.org 25). This is as it should be according to my main.cf. However, when I try to log in using Dovecot (openssl s_client -connect mail.example.com:993) I'm allowed in but denied when trying to identify myself as a user. Excerpt from the Dovecot log-in:

      Key-Arg   : None
      Start Time: 1341074622
      Timeout   : 300 (sec)
      Verify return code: 18 (self signed certificate)
      OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.

    When I continue and try to log in to a specific user with the command:

      A001 login user password

    I get:

      A001 NO [AUTHENTICATIONFAILED] Authentication failed.

    I've reset the password to ensure it is correct and I know the user (user) exists on the system. When I do /etc/init.d/dovecot reload I get:

      /etc/init.d/dovecot: 29: maildir:~/Maildir: not found
       * Reloading IMAP/POP3 mail server dovecot    [ OK ]

    Could it be that the mailboxes aren't found? Postfix main.cf:

      home_mailbox = Maildir/
      mailbox_command =
      recipient_delimiter = +
      inet_interfaces = all
      smtpd_use_tls = yes
      smtpd_tls_auth_only = no
      smtpd_tls_loglevel = 1
      smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
      smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
      smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
      smtpd_sasl_auth_enable = yes
      smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
      smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
      smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
      broken_sasl_auth_clients = yes
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_local_domain = $mydomain

    Dovecot.conf:

      protocols = imap imaps
      disable_plaintext_auth = no
      log_timestamp = "%b %d %H:%M:%S "
      ssl = yes
      ssl_cert_file = /etc/postfix/ssl/smtpd.crt
      ssl_key_file = /etc/postfix/ssl/smtpd.key
      mail_location = maildir:~/Maildir
      auth_verbose = yes
      mail_access_groups = mail
      auth_username_chars = abcdefghijklmnopqrstuvwxyz0123456789
      protocol imap {
        imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
      }
      auth default {
        mechanisms = plain login
        passdb pam {
        }
        userdb passwd {
        }
        socket listen {
          client {
            path = /var/spool/postfix/private/auth
            user = postfix
            group = postfix
            mode = 0660
          }
        }
      }

    Read the article

  • Correctly indexing multiple domains with same content in Google and others

    - by AJweb
    I have a client with a dozen territorial domains, like mydomain.co.uk, mydomain.fr, mydomain.de, etc. Most of these domains hold a different language of the same dynamic content (a shop), but some, like .co.uk and .com, have the same language and content, except for some content customized to each country/domain on the front page, contact page and other pages. I am aware that we should use the canonical meta tag to mark that duplicated content, but we want the .co.uk to be present in the UK (indexed in google.co.uk) and the .com to be present in the US and other countries, for example, or at least that is the goal. Is there anything we can do to "help" Google determine the geographical meaning of each domain? If we mark the .com and .co.uk sites with the canonical tag, do you know how Google will decide which one to show for a given search?

    Read the article

  • Text reverses on remote gnome session

    - by Andrew Stern
    I have two computers running 10.04. The first machine is a wired desktop running sshd. The second is a wifi-connected laptop with the ssh client. When I use my laptop to bring up a remote GNOME session to my desktop, all the text gets reversed. Steps:

      1) Log in as a user on the laptop to activate the wifi with a stored key.
      2) Go to a console with Ctrl-Alt-F1.
      3) Do a xterm -- :1 to bring up a blank graphic session.
      4) ssh -Y user@desktopmachine gnome-session

    This shows reversed text and messes up the keyboard so I can't type.

    Read the article

  • Extracting GPS Data from JPG files

    - by Peter W. DeBetta
    I have been very remiss in posting lately. Unfortunately, much of what I do now involves client work that I cannot post. Fortunately, someone asked me how he could get a formatted list (e.g. tab-delimited) of files along with the GPS data extracted from those files. He also added the constraint that this could not involve a new piece of software (company security) and had to be scriptable. I did some searching around and found some techniques for extracting GPS data, but was unable to find a complete solution. So, I did...(read more)
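
    The post is truncated above, so purely as a hedged illustration of one scriptable approach (not necessarily the author's): the sketch below lists JPGs carrying GPS EXIF data as tab-delimited file/latitude/longitude lines, assuming PHP with its exif extension is already available on the machine; the photo path is a placeholder.

      <?php
      // gps_list.php - hedged sketch: scan a folder of JPGs and print a
      // tab-delimited line for every file that contains GPS EXIF data.

      // Convert an EXIF rational string such as "514/100" to a float.
      function rationalToFloat($r) {
          list($num, $den) = array_pad(explode('/', $r), 2, 1);
          return $den == 0 ? 0.0 : $num / $den;
      }

      // Convert degrees/minutes/seconds plus the N/S/E/W reference to a signed decimal value.
      function gpsToDecimal(array $dms, $ref) {
          $deg = rationalToFloat($dms[0]) + rationalToFloat($dms[1]) / 60 + rationalToFloat($dms[2]) / 3600;
          return in_array($ref, array('S', 'W')) ? -$deg : $deg;
      }

      foreach (glob('/path/to/photos/*.jpg') as $file) {   // placeholder path
          $exif = @exif_read_data($file);
          if (!empty($exif['GPSLatitude']) && !empty($exif['GPSLongitude'])) {
              printf("%s\t%F\t%F\n",
                  $file,
                  gpsToDecimal($exif['GPSLatitude'], isset($exif['GPSLatitudeRef']) ? $exif['GPSLatitudeRef'] : 'N'),
                  gpsToDecimal($exif['GPSLongitude'], isset($exif['GPSLongitudeRef']) ? $exif['GPSLongitudeRef'] : 'E'));
          }
      }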

    Read the article

  • How to connect to a WCF service using IP of the host machine where the service is hosted?

    - by Kumar
    I have a secured WCF service (https://<MachineName>:sslport/services) self-hosted on a machine. Different instances of the same service are deployed on different machines. From a client app, I am able to connect to these services through code, i.e. using ChannelFactory() with the same endpoint address. But if I try to access the service using the endpoint address https://<ipAddress>:sslport/services, replacing the machine name with the machine's IP address, I get an error stating "could not establish trust relationship". I know this is an error caused by the SSL certificate: it could not establish a trust relationship. Are there any settings or any possibilities to make this work?

    Read the article

  • How do I enable the Ubuntu One tray applet?

    - by Richard Holloway
    I am running 10.04 and I am unable to get a tray applet to appear for Ubuntu One. I am sure there was an applet in 9.04 (Jaunty) and 9.10 (Karmic). I have the package ubuntuone-client-gnome installed which Synaptic tells me "This package contains the tray applet and Nautilus extension, providing integration with the GNOME desktop." The applet is not on the "Add to panel..." list and there doesn't appear to be anything in the menus. So how do I make the applet appear?

    Read the article

  • Can Empathy be configured to stay updated with a full conversation?

    - by kas
    The Google Talk web app and Android app will always update themselves with all messages sent from all clients, and I was wondering if Empathy can be configured to do this. For example, if I start a conversation on my phone and then change to my PC and use the web app, all the messages I sent will show up in the IM window on the PC even though I sent them when I was on my phone. Empathy does not behave this way. It only shows the IMs that occurred when using Empathy. If Empathy cannot do this, is there another client that can?

    Read the article

  • MVC helper functions business logic

    - by Menelaos Vergis
    I am creating some helper functions (ASP.NET MVC) for building common controls that I need in almost every project, such as alert boxes, dialogs, etc. If these contain no business logic and it's just client-side code (HTML, JS), then it's fine. My problem arises when I need some business logic behind such a helper. I want to create a 'rate my (web) application' control that will be visible every 3 days, and the user may hide it for now, navigate to the rate link, or hide it forever. To do this I need some sort of database access and code that acts as business logic. Normally I would use a controller for this, with my DI and everything, but I don't know where to put this code now. Should this be placed in the helper function, or in a controller that returns objects instead of ActionResults?

    Read the article

  • Internal and external API architecture

    - by Tacomanator
    The company I work for maintains a successful SaaS product that grew "organically" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by: the web applications, a tool to import data, and a tool to integrate with other client software (not an API per se). We also want to create an API that can be used by those of our customers who are capable of using it to create their own integrations. We are struggling with the following question: should the internal API (aka the WS layer) and the external API be one and the same, with security and permission settings to control what can be done by whom, or should they be two separate applications where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure, but will add overhead. What have others done in a similar situation?
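
    For what the second option can look like in practice, here is a hedged sketch (the URL, header and key check are placeholders invented for illustration, not anything from the question): the external API is a thin, separately deployed application that authenticates the outside caller and then consumes the internal WS layer exactly like any other client.

      <?php
      // external_api.php - hypothetical thin public endpoint in front of the internal WS layer.

      $internalBase = 'https://internal-ws.example.local';   // placeholder internal address

      // Authenticate the external caller; the API-key scheme here is purely illustrative.
      function isValidCustomerKey($key) {
          return $key !== '' && $key === getenv('CUSTOMER_API_KEY');
      }

      $apiKey = isset($_SERVER['HTTP_X_API_KEY']) ? $_SERVER['HTTP_X_API_KEY'] : '';
      if (!isValidCustomerKey($apiKey)) {
          http_response_code(401);
          exit;
      }

      // Forward the request to the internal service layer like any other client application.
      $ch = curl_init($internalBase . $_SERVER['REQUEST_URI']);
      curl_setopt_array($ch, array(
          CURLOPT_RETURNTRANSFER => true,
          CURLOPT_CUSTOMREQUEST  => $_SERVER['REQUEST_METHOD'],
          CURLOPT_POSTFIELDS     => file_get_contents('php://input'),
      ));
      $body = curl_exec($ch);

      http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE));
      header('Content-Type: application/json');
      echo $body;
      curl_close($ch);

    The single-application alternative collapses this layer into permission checks inside the WS itself; the trade-off the question describes is essentially this extra hop versus a smaller, better-isolated public surface.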

    Read the article

  • Issues with the intended behavior of a Service layer?

    - by Rafael Cichocki
    This analysis makes sense, and it states that anything which avoids code duplication and simplifies maintenance speaks for a service layer. But what is the technical behavior? When a service client references a service, does it do so at runtime, or does it happen at compile time? When I change something in the service layer code, will this change automatically be taken into account in all its clients, or do they need to be individually recompiled? How does this make sense from a testing point of view - I have working code, based on some code from a service, but if that service changes, my code might break?!

    Read the article

  • Network Load Balancing and AnyCast Routing

    - by user126917
    Hi all, can anyone advise on problems with the following? I am planning on installing the following setup on my estate: I have 2 sites that both have a large number of users. The goals are to keep things simple for the users and to have automatic failover above the database level. Our database will exist at the primary site and be asynchronously mirrored to the secondary site with manual failover procedures. The database generates sequential IDs, so distributing it is not an option. I plan to site IIS boxes at both sites with all of the business logic and heavy operations on them. The connections to SQL will be lightweight and DB reads will be cached on IIS. On this layer I plan to use Windows Network Load Balancing and have the same IP or IPs across all IIS boxes at both sites. This way there will be automatic failover and no single point of failure. Also, users can have one web address regardless of which site they are in and automatically be network load balanced to their local IIS. This is great, but obviously our two sites are on different subnets, and as this will be one IP address carrying most of our traffic we can't go broadcasting everything across the link between the sites. To solve this problem we plan to use anycast routing over our network layer to route the traffic to the most local box that is listening, which will be defined by the network load balancing. Has anyone used this setup before? Can anyone think of any issues with it? Also, some specifics I can't find anywhere at the moment: if my Windows box is assigned an IP and listening on that IP, but network load balancing is not accepting specific traffic, will anycast route away from that box? And can I anycast on a socket level?

    Read the article

  • what determines the bios you can use

    - by andy
    I have a Compaq SR5710F with an MCP61PM-HM (Iris8) motherboard and loaded a GeForce 6100SM-M BIOS onto it. It works fine and opened up some options, but I want to give my computer AM3 socket support. So my question is: if I go looking for a BIOS that will do that for me, what do I need to pay attention to? Is it only the chipset, which is GeForce 6150SE nForce 405, or are there more things I should be looking at? If anyone thinks they might know a BIOS that will help me out, that would be good too. I am looking for a retail BIOS though; I do not want to load any of the experimental ones you can find on the net. Also, I would need it to support DDR2 800 RAM, and would like to keep AM2 support too. I know my BIOS is data that is downloaded onto my motherboard, or for lack of a better word a program. I am currently running a BIOS that is not for my motherboard, so I know it can be done. What I need to know is what to pay attention to when looking for compatible BIOS images that are not for my motherboard.

    Read the article

  • My error with upgrading 4.0 to 4.2 - What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were not any resources to fail back and forth between the controllers. Each controller had its own, private, management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2.

    Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we had just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still on 4.0, and the 4.2 code was there but was in the "previous" state with the rollback icon, as if it was the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact that the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet.

    So.... We deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up to the same issue, where it came up on 4.0 instead of 4.2. See the picture below....

    HERE IS WHERE I MADE A BIG MISTAKE.... I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code, determine what file was messed up, and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code. Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time. So....
If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK.... It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable. Lesson learned. By the way, our buddy Ryan Matthews wanted to point out the best practice and supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade.

      1) Upload software to both controllers and wait for it to unpack.
      2) On controller "A" navigate to configuration/cluster and click "takeover".
      3) Wait for controller "B" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software.
      4) Wait for controller "B" to apply the update and finish rebooting.
      5) Login to controller "B", navigate to configuration/cluster and click "takeover".
      6) Wait for controller "A" to finish restarting, then login to it, navigate to maintenance/system, and roll forward to the new software.
      7) Wait for controller "A" to apply the update and finish rebooting.
      8) Login to controller "B", navigate to configuration/cluster and click "failback".

    Read the article

  • SEO blog Indexing: wordpress.com subdomain vs a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a wordpress.com subdomain for my portfolio. When using a WP installation concurrently with a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than with a generic myportfolio.com domain? I've seen mixed opinions, where some people seem to favor a WP domain for URL output while others say that it's a moot point, and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree and believe a WP domain would be more likely to be indexed and output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong?

    Read the article

  • Why can't I get a working session with vnc4server

    - by ysap
    We have a couple of (identical) Ubuntu 11.10 machines, configured with gnome-classic, which we use as remote servers and let our clients log into personal user accounts we create for them using vnc4server. We configured all the machines in the same way, following a short manual we compiled describing how to download, install and prepare a few tools and our software. The connection usually works fine, but today I set up a fresh machine and experienced problems. After installing vnc4server, I ran vncpasswd and copied the following startup file to ~/.vnc/xstartup:

      #!/bin/sh

      unset SESSION_MANAGER
      unset DBUS_SESSION_BUS_ADDRESS
      gnome-session --session=gnome-classic &

      [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
      [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
      xsetroot -solid grey
      vncconfig -iconic &

    Then I started vnc4server and used two viewers (the Ubuntu Remote Desktop Viewer and the Windows RealVNC client) on two other machines, but instead of getting my desktop, I see an empty window with a grey-ish background pattern, and the cursor is a bold X. What is wrong with the setup and why don't I get a remote session as expected?

    Read the article

  • Track url from Amazon S3 using Google Analytics

    - by morktron
    I couldn't find any decent pay-per-view video solutions for low-budget clients, so I'm considering using a membership extension with Joomla and hosting the video on Amazon S3. The only issue is that once someone has signed up to view or download the video, if they have any web development experience they will be able to get the URL of the video and freely publish it on the web. How can this be prevented? It looks like it can be done using IAM User Temporary Credentials - AWS SDK for PHP, but the client would prefer not to have to pay someone to spend hours writing custom PHP code to get this to work. With Amazon S3 I could at least check the log files, I guess, to manually monitor the URL, but is there a way to track the URL with Google Analytics? Or is there a more elegant solution?
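
    For the "stop the raw URL from being shared" part, the usual approach is to keep the S3 object private and hand each logged-in member a short-lived pre-signed URL. The sketch below is hedged: it assumes a current AWS SDK for PHP (v3) installed via Composer and uses placeholder bucket, key and region names; it is only a few lines rather than hours of custom code.

      <?php
      // Hedged sketch: generate a time-limited (pre-signed) S3 URL for a private object.
      // Bucket, key and region below are placeholders.
      require 'vendor/autoload.php';

      use Aws\S3\S3Client;

      $s3 = new S3Client([
          'version' => 'latest',
          'region'  => 'us-east-1',
      ]);

      $cmd = $s3->getCommand('GetObject', [
          'Bucket' => 'my-video-bucket',
          'Key'    => 'videos/lesson-01.mp4',
      ]);

      // The link stops working after 15 minutes, so a copied URL is of little use.
      $request = $s3->createPresignedRequest($cmd, '+15 minutes');
      echo (string) $request->getUri();

    Since the S3 download itself never passes through your own pages, Google Analytics would only see the page or script that hands out the link, which is one more reason to route members through such a script.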

    Read the article
