Search Results

Search found 24387 results on 976 pages for 'ssh client'.

Page 504/976 | < Previous Page | 500 501 502 503 504 505 506 507 508 509 510 511  | Next Page >

  • Can't bring NAT to work

    - by user31738
    Hello, I bought a D-Link DIR-300 wireless router and I can't get NAT to work. I have an SSH and an HTTP service that I need to forward to the Internet. My connection is as follows: I have an ADSL connection through an ADSL Ethernet modem that is connected and working, but it doesn't let me put it in bridge mode. My router is connected to the ADSL modem over Ethernet and gets its IP through DHCP (and it's always the same). I have a desktop computer running Linux with Apache and OpenSSH configured and working, and it has a fixed IP. In the modem I configured NAT to forward port 22 from the router's IP to the Internet. In the router I set up NAT to forward port 22 to the desktop computer's fixed IP. This setup already worked with a Fonera I had before. Can anyone help me with this, or tell me what kind of tests I need to run? How can I test whether the router is forwarding ports correctly, before involving the modem?
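
    For illustration, one way to test the router on its own is a plain TCP connect check: first against the desktop's LAN IP (should succeed), then against the router's WAN-side address from a machine plugged into the modem's side, which exercises the router's port 22 forward without the modem being involved. A minimal sketch in Python; the default host and port below are placeholders for your own setup:

        #!/usr/bin/env python3
        """Quick TCP reachability check: is something answering on host:port?"""
        import socket
        import sys

        def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError as exc:
                print(f"{host}:{port} -> {exc}", file=sys.stderr)
                return False

        if __name__ == "__main__":
            host = sys.argv[1] if len(sys.argv) > 1 else "192.168.0.10"  # placeholder address
            port = int(sys.argv[2]) if len(sys.argv) > 2 else 22
            print(f"{host}:{port} is", "open" if port_open(host, port) else "closed/filtered")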

    Read the article

  • VM can't connect to outside in bridged mode

    - by Kamal
    Hi Guys, I am not able to ping any machine (not even the host) from the guest VM in bridged mode, even though I got an IP on the same subnet as the host. I can ping the guest VM from the host and can use SSH to connect to the guest. I am using VMware Workstation 6.5. The guest is a CentOS VM and the host is Windows XP. Everything works fine in NAT mode. Any clues as to what could be happening? I have tried disabling all the firewalls I have.

    Read the article

  • Bootable and remote controllable imaging software?

    - by stefan.at.wpf
    I am looking for imaging software that one can boot from CD or USB stick (therefore without installing anything on the system!) and then control by some kind of remote desktop. So something like a combination of Acronis TrueImage's bootable media and Windows Remote Desktop. I don't even need to be able to store the backup on a network drive, as an external USB HDD is attached for this purpose. Background: the system is normally controlled over SSH / the command line and therefore has no keyboard or mouse connected that I could use to control Acronis TrueImage.

    Read the article

  • Code deploy system [closed]

    - by Turnaev Evgeny
    Currently we deploy code to servers in various ways: FreeBSD packages, FreeBSD ports, and, for part of the config files and static content, a plain svn up after which a symlink is switched to the newly updated folder. The distribution of FreeBSD packages to target servers is done through a custom tool that uses SSH. I am looking for a code deploy system that will: deploy several packages (FreeBSD or Linux) atomically (either deploy all of them to a server or none); keep a history of the last stable versions, so in case of a bad deploy I can easily roll back to the last working one on all servers; and ease deployment of config and static files, integrating those into the atomic deploy/rollback scheme. It should work with FreeBSD or Linux (apt-get based systems).
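
    For reference, the "switch a symlink to the newly updated folder" pattern the poster already uses can itself be made close to atomic, because renaming a fresh symlink over the old one is a single filesystem operation, and rollback is just pointing the link back at an older release. A hypothetical sketch of that layout (the paths and version names are made up, not taken from the question):

        #!/usr/bin/env python3
        """Minimal sketch of symlink-switch deploys with rollback.

        Assumed (hypothetical) layout:
          /srv/app/releases/<version>/   unpacked releases
          /srv/app/current               symlink to the active release
        """
        import os

        RELEASES = "/srv/app/releases"
        CURRENT = "/srv/app/current"

        def activate(version: str) -> None:
            """Point the 'current' symlink at a release, atomically."""
            target = os.path.join(RELEASES, version)
            if not os.path.isdir(target):
                raise FileNotFoundError(target)
            tmp = CURRENT + ".tmp"
            if os.path.lexists(tmp):
                os.unlink(tmp)
            os.symlink(target, tmp)
            os.replace(tmp, CURRENT)   # atomic rename over the old link

        def rollback() -> None:
            """Re-activate the lexically latest release other than the current one."""
            current = os.path.realpath(CURRENT)
            for v in sorted(os.listdir(RELEASES), reverse=True):
                if os.path.join(RELEASES, v) != current:
                    activate(v)
                    return
            raise RuntimeError("no previous release to roll back to")

        if __name__ == "__main__":
            activate("2012-06-01")  # placeholder version directory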

    Read the article

  • Spree customize/extend user roles and permissions

    - by swapnil
    I am trying to specify some custom roles in Spree, for example a role 'client', and extend the permissions so that this role can access the admin section. This user will be able to access only those Products created by that user. The concept is letting a user with role 'client' manage only products and certain other models. To start with, I added the CanCan plugin and defined a RoleAbility class in role_ability.rb, following this post: Spree Custom Roles Permissions

        class RoleAbility
          include CanCan::Ability

          def initialize(user)
            user ||= User.new
            if user.has_role? 'admin'
              can :manage, :all
            elsif user.has_role? 'client_admin'
              can :read, Product
              can :admin, Product
            end
          end
        end

    Added this to an initializer, config/initializers/spree.rb:

        Ability.register_ability(RetailerAbility)

    Also extended admin_products_controller_decorator.rb (app/controllers/admin_products_controller_decorator.rb):

        Admin::ProductsController.class_eval do
          def authorize_admin
            authorize! :admin, Product
            authorize! params[:action].to_sym, Product
          end
        end

    But I am getting the flash message 'Authorisation Failure'. Trying to find some luck, I referred to the following links: a GitHub gist for customizing Spree roles, https://gist.github.com/1277326 and a similar issue to the one I am facing, http://groups.google.com/group/spree-user/browse_thread/thread/1e819e10410d03c5/23b269e09c7ed47e All efforts in vain... Any pointers on what is going on here are highly appreciated. Thanks in advance.

    Read the article

  • scp vs netatalk, samba, and/or vsftpd with External USB drive

    - by KitsuneYMG
    I set up an Ubuntu server machine to share an ext2-formatted external USB drive. When attempting to copy a single 275 MB file from that device through netatalk, I get an estimated download time of around 45 minutes. With Samba and FTP (using vsftpd) I get 1+ hours! Using scp to copy the file completes the download within 5 minutes. Another option, ssh+cp from the external device to ~ and then using netatalk to grab it from there, results in a total time of around 7 minutes. Does anyone have a clue what is misconfigured? Assuming that nothing is, is there any fs/pseudo-fs that would use the internal HDD as an intermediate location/onion-layer for the external HDD (for reads only)? Details, from AppleVolumes.default:

        /mnt/ext USB allow:username cnidscheme:cdb options:usedots,upriv

    Read the article

  • Unable to Get a Correct Time when I am Calling serverTime using jquery.countdown.js + Asp.net ?

    - by user312891
    When I call the function below I am unable to get a correct answer. Both var shortly and newTime hold the same time; one comes from the client side, the other is synced with the server (plugin: http://keith-wood.name/countdown.html). I am waiting for your response. Thanks

        $(function() {
            var shortly = new Date('April 9, 2010 20:38:10');
            var newTime = new Date('April 9, 2010 20:38:10');
            $('#defaultCountdown').countdown({
                until: shortly,
                onExpiry: liftOff,
                onTick: watchCountdown,
                serverSync: serverTime
            });
            $('#div1').countdown({ until: newTime });
        });

        function serverTime() {
            var time = null;
            $.ajax({
                type: "POST",
                // Page name (in which the method should be called) and method name
                url: "Default.aspx/GetTime",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                // If you want to pass parameters to the server-side method, put them in 'data'
                data: "{}",
                async: false,
                // else, if you don't want to pass any value to the server-side method, leave data blank:
                //data: "{}",
                success: function(msg) {
                    // Got the response from the server and render it on the client
                    time = new Date(msg.d);
                    alert(time);
                },
                error: function(msg) {
                    time = new Date();
                    alert('1');
                }
            });
            return time;
        }

    Read the article

  • Systems design question: DB connection management in load-balanced n-tier

    - by aoven
    I'm wondering about the best approach to designing a DB connection manager for a load-balanced n-tier system. Classic n-tier looks like this:

        Client -> BusinessServer -> DBServer

    A load-balancing solution as I see it would then look like this:

                                    +--> ...            +--+
                                    +--> BusinessServer +--+--> SessionServer --+
        Client -> Gateway --+--> BusinessServer +--|                    +--> DBServer
                                    +--> BusinessServer +--+--------------------+
                                    +--> ...            +--+

    As pictured, the business server component is load-balanced via multiple instances, and a hardware gateway distributes the load among them. The session server probably needs to be situated outside the load-balancing array, because it manages state, which mustn't be duplicated. Barring any major errors in the design so far, what is the best way to implement DB connection management? I've come up with a couple of options, but there may be others I'm not aware of:

    1. Introduce a new Broker component between the DBServer and the other components and let it handle the DB connections. The upside is that all the connections can be managed from a single point, which is very convenient. The downside is that now there is an additional "single point of failure" in the system; other components must go through it for every request that involves the DB in some way, which also makes it a bottleneck.

    2. Move the DB connection management into the BusinessServer and SessionServer components and let each handle its own DB connections. The upside is that there is no additional "single point of failure" or bottleneck component. The downside is that there is also no control over possible conflicts and deadlocks apart from what the DBServer itself can provide.

    What else can be done? FWIW: the technology is .NET, but none of the vendor-specific stacks are used (e.g. no WCF, MSMQ or the like).
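
    For what it's worth, option 2 in practice usually amounts to a small connection pool inside each BusinessServer/SessionServer process rather than opening a raw connection per request. A rough sketch of such a pool, shown in Python purely for brevity (the real stack here is .NET, where the data provider's built-in pooling normally plays this role); the open_connection factory below is a placeholder, not a real driver call:

        """Illustrative per-process connection pool (option 2 above)."""
        import queue
        from contextlib import contextmanager

        class ConnectionPool:
            def __init__(self, factory, size=10, timeout=5.0):
                self._timeout = timeout
                self._idle = queue.Queue(maxsize=size)
                for _ in range(size):
                    self._idle.put(factory())    # pre-open a fixed number of connections

            @contextmanager
            def connection(self):
                conn = self._idle.get(timeout=self._timeout)   # block if the pool is exhausted
                try:
                    yield conn
                finally:
                    self._idle.put(conn)         # return the connection for reuse

        # Placeholder factory standing in for a real driver's connect() call.
        def open_connection():
            return object()

        pool = ConnectionPool(open_connection, size=4)
        with pool.connection() as conn:
            pass  # run queries with `conn` here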

    Read the article

  • Trouble Connecting to Virtual Machine after IP address Change

    - by David
    I have a VMware image running a copy of Fedora 11 which is hosted on a remote server. The remote server recently had its IP address changed. I'm now unable to connect to my virtual machine. The server admin assures me that my virtual machine is running and assigned the new IP address. I have checked the firewalls and had the remote admin restart the VM instance. Neither of these fixed the problem. How do I troubleshoot a remote server which I am unable to SSH to? I'm actually even unable to ping the remote IP (the connection times out).

    Read the article

  • POST variables to web server?

    - by OverTheRainbow
    Hello, I've been trying several things from Google to POST data to a web server, but none of them work. I'm still stuck on how to convert the variables into the request, considering that the second variable is an SQL query, so it has spaces. Does someone know the correct way to use a WebClient to POST data? I'd rather use WebClient because it requires less code than HttpWebRequest/HttpWebResponse. Here's what I tried so far:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim wc = New WebClient()
            'convert data
            wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded")
            Dim postData = String.Format("db={0}&query={1}", _
                HttpUtility.UrlEncode("books.sqlite"), _
                HttpUtility.UrlEncode("SELECT id,title FROM boooks"))
            'Dim bytArguments As Byte() = Encoding.ASCII.GetBytes("db=books.sqlite|query=SELECT * FROM books")
            'POST query
            Dim bytRetData As Byte() = wc.UploadData("http://localhost:9999/get", "POST", postData)
            RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData)
            Exit Sub

            Dim client = New WebClient()
            Dim nv As New Collection
            nv.Add("db", "books.sqlite")
            nv.Add("query", "SELECT id,title FROM books")
            Dim address As New Uri("http://localhost:9999/get")
            'Dim bytRetData As Byte() = client.UploadValues(address, "POST", nv)
            RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData)
            Exit Sub

            'Dim wc As New WebClient()
            'convert data
            wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded")
            Dim bytArguments As Byte() = Encoding.ASCII.GetBytes("db=books.sqlite|query=SELECT * FROM books")
            'POST query
            'Dim bytRetData As Byte() = wc.UploadData("http://localhost:9999/get", "POST", bytArguments)
            RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData)
            Exit Sub
        End Sub

    Thank you.

    Read the article

  • Tomato vs X-Wrt Wireless Router Firmware - which is better?

    - by wag2639
    A few years ago I switched over from DD-WRT to Tomato and I haven't looked back since. Before I did, I poked around with OpenWRT but found it too confusing or annoying to use (and I'm a CS major and have set up and configured Linux servers over SSH). I'm probably not going back to DD-WRT because of all the controversy, but I was wondering how X-Wrt is nowadays? From the screenshots, it looks a lot more feature-packed than Tomato, and that definitely has its appeal. Then again, simplicity has its advantages too. Any thoughts?

    Read the article

  • opennebula 3.4 in debian squeeze

    - by Jin Splif
    Hope I can get some advice and help... Currently I am installing OpenNebula 3.4 on Debian Squeeze. Everything has been successful: I am able to access the OpenNebula Sunstone web page at localhost:9869 and use the one command, but when I tried to create a host its status became error... Hope someone can assist me on this. Thanks. Sample log:

        Monitoring host abc (0)
        [InM][I]: Command execution fail: 'if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm 0 abc; else exit 42; fi'
        [InM][I]: ssh: Could not resolve hostname abc: Name or service not known
        [InM][I]: ExitCode: 255
        [InM][E]: Error monitoring host 0 : MONITOR FAILURE 0 -

    Read the article

  • How to send parameters with same encoding from javascript?

    - by nimcap
    I have a JavaScript file that lots of people have embedded in their pages. Since I am hosting the file, I have control over the JavaScript file itself; I cannot control the way it is embedded, because lots of people are using it already. This JavaScript file sends GET requests to my servlets, and the parameters passed with the request are recorded to the DB. For example, the JavaScript sends a request to http://myserver.com/servlet?p1=123&p2=aString and then the servlet records 123 and aString in the DB somehow. Before sending strings I use encodeURIComponent() to encode them. But what I figured out is that every client sends the same string with a different encoding, depending on either their browser or the site they are visiting. As a result, the same strings are represented with different characters when they reach the servlet (so they are different strings). What I am trying to do is to convert the strings to one kind of encoding from JavaScript, so that when they reach the server the same words are represented with the same characters. How is this possible? PS. If there is a way to convert the encoding from Java, that is also applicable.
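
    For illustration: encodeURIComponent() always produces UTF-8 percent-escapes, so the mismatch usually comes from the server decoding the query string with its platform default charset instead of UTF-8. A small Python sketch of the round-trip the servlet needs to reproduce (in a servlet container this corresponds to decoding request parameters as UTF-8, for example via the container's URI-encoding setting); the sample value is made up:

        """Percent-encoding is unambiguous only if both sides agree on the charset."""
        from urllib.parse import quote, unquote

        original = "grüße"                           # sample non-ASCII parameter value

        encoded = quote(original)                    # what encodeURIComponent() would send
        print(encoded)                               # gr%C3%BC%C3%9Fe

        print(unquote(encoded, encoding="utf-8"))    # grüße   -> decoded correctly
        print(unquote(encoded, encoding="latin-1"))  # grÃ¼ÃŸe -> the mismatch seen server-side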

    Read the article

  • PEAR mail not sending to .eu email addresses

    - by andy-score
    I have a PEAR mailing script that is used to send newsletters from a client's website. I've used the same code before to build another newsletter system and it has worked well, sending emails to various addresses, but our latest client has email addresses ending in .eu and this seems to cause a problem. When the newsletter is sent from the site to the various subscribers, including Gmail, Hotmail, Yahoo and our own company addresses, the emails are received correctly by all but the client's email addresses, the ones ending in .eu. As there is nothing different between their mailing system and our own, which is run from the same hosting company, I have to conclude that it is something to do with the domain name. The emails are being sent to the addresses from the system, as I have a log file storing the email addresses when the mail-out function is called, but the newsletter never appears in the inbox. I have created a new email account for the domain and that too isn't receiving the emails. It's not going into a spam folder, as the webmail system marks spam by adding SPAM to the subject. I've tried to log whether there are any errors using the following:

        foreach ($subscribers as $recipient) {
            $send_newsletter = $mail->send($recipient, $headers, $body);
            // LOG INFO
            $message = $recipient;
            if ($send_newsletter) {
                $message .= ' SENT';
            } elseif (PEAR::isError($send_newsletter)) {
                $message .= ' ERROR: ' . $send_newsletter->getMessage();
            }
            $message .= ' | ';
            fwrite($log_file, $message);
        }

    However this simply returns SENT for all recipients, so in theory there isn't anything wrong with the mailing function. I don't know a great deal about PEAR or the mailing function so I may be missing something important, but I'd have thought that, since the last thing to happen is sending the email out, and that seems to work, it should reach the client's inbox. Is this something to do with the PEAR mailing function not liking .eu addresses, or is it more likely to be something wrong in my code or with their domain? Any help is greatly appreciated, as the client and myself are getting both confused and frustrated by the whole thing. Cheers

    Read the article

  • Associate "Code/Properties/Stuff" with Fields in C# without reflection. I am too indoctrinated by J

    - by AlexH
    I am building a library to automatically create forms for objects in the project that I am working on. The codebase is in C#, and essentially we have a HUGE number of different objects to store information about different things. If I send these objects to the client side as JSON, it is easy enough to programmatically inspect them to generate a form for all of the properties. The problem is that I want to be able to create a simple way of enforcing permissions and doing validation on the client side. It needs to be done on a field-by-field level. In JavaScript I would do this by creating a parallel object structure which had some sort of { permissions : "someLevel", validator : someFunction } object at the nodes, with empty nodes implying free permissions and universal validation. This would let me simply iterate over the new object and the permissions object, run the check, and deal with the result. Because I am over-familiar with the hammer that is JavaScript, this is really the only way that I can see to deal with this problem. My first implementation thus uses reflection to let me treat objects as dictionaries that can be programmatically iterated over, and then I just have dictionaries of dictionaries of PermissionRule objects which can be compared with. Very javascripty. Very awkward. Is there some better way that I can do this? Essentially, a way to associate a data set with each property, and then iterate over those properties. Or else am I Doing It Wrong?

    Read the article

  • CentOS send mail with external SMTP server and without local daemons

    - by Vilx-
    I've got a little old server with CentOS 6.5 on it. The hardware is old and crappy, but enough for what it has to do, which consists of SSH (+SFTP), Apache, PHP and MySQL. Still, I'm trying to cut away all that I can. One thing it does not need to do is be an SMTP server. There are no mailboxes on it and nobody will ever route mail through it. However, I do want it to send me an email when something goes wrong, and the web pages will send emails from PHP. So that brings me to the question: can I set up the mail system in such a way that there isn't an expensive mailer daemon sitting in the background with queues and whatnot else, but rather every email is directly and immediately delivered to an external SMTP server? And how do I go about it?
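
    Conceptually this is asking for a plain "submit straight to the external relay" client instead of a queueing MTA; lightweight sendmail replacements in the ssmtp/msmtp family do essentially that for the system's sendmail interface. A minimal sketch of the idea, shown in Python's smtplib rather than PHP (the relay host, port, addresses and credentials below are placeholders):

        """Direct submission of one message to an external SMTP relay: no local queue, no daemon."""
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "server@example.com"
        msg["To"] = "admin@example.com"
        msg["Subject"] = "Something went wrong"
        msg.set_content("Disk is filling up on the old CentOS box.")

        # Connect to the external relay, authenticate, hand the message over, done.
        with smtplib.SMTP("smtp.example.com", 587) as smtp:
            smtp.starttls()
            smtp.login("server@example.com", "app-password")  # placeholder credentials
            smtp.send_message(msg)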

    Read the article

  • Multi-module maven build : different result from parent and from module

    - by Albaku
    I am migrating an application from an Ant build to a Maven 3 build. This app is composed of: a parent project specifying all the modules to build; a project generating classes with JAXB and building a jar from them; a project building an EJB module; 3 projects building war modules; and 1 project building an ear. Here is an extract from my parent pom:

        <groupId>com.test</groupId>
        <artifactId>P</artifactId>
        <packaging>pom</packaging>
        <version>04.01.00</version>
        <modules>
          <module>../PValidationJaxb</module> <!-- jar -->
          <module>../PValidation</module>     <!-- ejb -->
          <module>../PImport</module>         <!-- war -->
          <module>../PTerminal</module>       <!-- war -->
          <module>../PWebService</module>     <!-- war -->
          <module>../PEAR</module>            <!-- ear -->
        </modules>

    I have several problems which I think have the same origin, probably a dependency management issue that I cannot figure out: The generated modules are different depending on whether I build from the parent pom or a single module. Typically, if I build PImport only, the generated war is similar to what I had with my Ant build, but if I build from the parent pom, my war grows to 20 MB because a lot of dependencies from other modules have been added. Both wars run fine. My project PWebService has unit tests to be executed during the build. It uses mock-ejb, which has cglib as a dependency. Having a ClassNotFound problem with it, I had to exclude cglib and add a dependency on cglib-nodep (see the last pom extract). If I then build only this module, it works well. But if I build from the parent project, it fails because dependencies in other modules also have an implicit dependency on cglib. I had to exclude it in every module's pom and add the dependency on cglib-nodep everywhere to make it run. Am I missing something important in my configuration?

    The PValidation pom extract (it creates a jar containing an EJB with interfaces generated by XDoclet, as well as a client jar):

        <parent>
          <groupId>com.test</groupId>
          <artifactId>P</artifactId>
          <version>04.01.00</version>
        </parent>
        <artifactId>P-validation</artifactId>
        <packaging>ejb</packaging>
        <dependencies>
          <dependency>
            <groupId>com.test</groupId>
            <artifactId>P-jaxb</artifactId>
            <version>${project.version}</version>
          </dependency>
          <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate</artifactId>
            <version>3.2.5.ga</version>
            <exclusions>
              <exclusion>
                <groupId>cglib</groupId>
                <artifactId>cglib</artifactId>
              </exclusion>
            </exclusions>
          </dependency>
          <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2.2</version>
          </dependency>
          ... [other libs] ...
        </dependencies>
        <build>
          ...
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-ejb-plugin</artifactId>
              <configuration>
                <ejbVersion>2.0</ejbVersion>
                <generateClient>true</generateClient>
              </configuration>
            </plugin>
            <plugin>
              <groupId>org.codehaus.mojo</groupId>
              <artifactId>xdoclet-maven-plugin</artifactId>
              ...

    The PImport pom extract (it depends on both the JAXB-generated jar and the EJB client jar):

        <parent>
          <groupId>com.test</groupId>
          <artifactId>P</artifactId>
          <version>04.01.00</version>
        </parent>
        <artifactId>P-import</artifactId>
        <packaging>war</packaging>
        <dependencies>
          <dependency>
            <groupId>com.test</groupId>
            <artifactId>P-jaxb</artifactId>
            <version>${project.version}</version>
          </dependency>
          <dependency>
            <groupId>com.test</groupId>
            <artifactId>P-validation</artifactId>
            <version>${project.version}</version>
            <type>ejb-client</type>
          </dependency>
          ... [other libs] ...
        </dependencies>

    The PWebService pom extract:

        <parent>
          <groupId>com.test</groupId>
          <artifactId>P</artifactId>
          <version>04.01.00</version>
        </parent>
        <artifactId>P-webservice</artifactId>
        <packaging>war</packaging>
        <properties>
          <jersey.version>1.14</jersey.version>
        </properties>
        <dependencies>
          <dependency>
            <groupId>com.sun.jersey</groupId>
            <artifactId>jersey-servlet</artifactId>
            <version>${jersey.version}</version>
          </dependency>
          <dependency>
            <groupId>com.rte.etso</groupId>
            <artifactId>etso-validation</artifactId>
            <version>${project.version}</version>
            <type>ejb-client</type>
          </dependency>
          ... [other libs] ...
          <dependency>
            <groupId>org.mockejb</groupId>
            <artifactId>mockejb</artifactId>
            <version>0.6-beta2</version>
            <scope>test</scope>
            <exclusions>
              <exclusion>
                <groupId>cglib</groupId>
                <artifactId>cglib-full</artifactId>
              </exclusion>
            </exclusions>
          </dependency>
          <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2.2</version>
            <scope>test</scope>
          </dependency>
        </dependencies>

    Many thanks

    Read the article

  • Not able to login to new AMI on EC2 - moving from micro to small instance

    - by zengr
    I had a t1.micro instance (old_server) running Linux on EC2, and now I need to upgrade my server to an m1.small Linux instance (new_server). So, here is what I did: shut down old_server and create an AMI, then launch the new AMI with the m1.small configuration (I kept the key and security group the same as old_server). I tried to log in with: ssh -i my_key.pem [email protected] But it gives a connection timeout error. My login to old_server works fine. So, my questions are: What is the correct way to scale up (vertically) an EC2 instance? Where am I going wrong in the steps above? When I create an AMI in step 1, is the EBS (data) also copied?

    Read the article

  • Cisco 877 as PPPoA/PPPoE bridge (no routing) - how to make it listen to IP for management?

    - by Ingmar Hupp
    I have a Cisco 877 configured to bridge ADSL with PPPoA to PPPoE on Vlan1. This works fine, but in this mode the only way I can configure the Cisco is via the serial console. I'd like to have the Cisco also listen on an IP address so I can telnet/ssh into it. I think the right way to go about this would be via bridge irb, but I'm not sure exactly how (or if that's even the right direction). IOS is 12.4T and my current config (cut down to essentials) is:

        no ip routing
        no ip cef
        !
        !
        interface ATM0
         no ip address
         no ip route-cache
         no atm ilmi-keepalive
         pvc 0/38
          encapsulation aal5snap
         !
         dsl operating-mode auto
         bridge-group 1
        !
        !
        interface Vlan1
         no ip address
         no ip route-cache
         bridge-group 1

    Just setting an IP address on Vlan1 didn't have the desired effect, but surely this must be possible somehow (the Draytek Vigor 120 even does it by default).

    Read the article

  • Does ILOM on recent Sun rackmount servers fully support Linux?

    - by orange80
    Do the recent Sun rackmount servers with ILOM fully support installation of Linux (Ubuntu or Fedora, maybe) via the ILOM (connected by ssh), without having to hook up a display, keyboard, and mouse? I have an old Sun v20z right now that will install Solaris no problem over the Service Processor, but when trying to install Ubuntu 9 64-bit Server I get one line on the console and then it goes blank. I'd be interested in knowing which, if any, of the recent x64 models would allow me to install and run Linux while completely avoiding any need for an external display, keyboard, or mouse. Thanks!

    Read the article

  • How to disable VGA on new server... reversibly?

    - by Javawag
    Strange question here. I have a server with 1GB of RAM, however when booted this shows as 768MB. I've discovered the reason for it – it has an onboard graphics card which shares memory with main RAM. Running Ubuntu Server, it doesn't actually ever use anything graphical – it's all set up to be SSH'd into and therefore there's no need for VGA. I believe there may be a setting to switch off the VGA/graphics card in the BIOS, but my question to you guys is: is this recommended? And if I turn it off, how would I turn it back on again in the BIOS (given that with it off, I wouldn't be able to see the option to turn it on, as there would be no graphics output)?

    Read the article

  • Android software for the system administrator on the move

    - by GruffTech
    My company has service through Verizon, and AT&T service in the area is shoddy at best, so I haven't been able to join the iPhone party like so many of my fellow system administrators. That being said, this week a phone I like has finally hit Verizon: the HTC Incredible. (I've been waiting for the Desire or Nexus One, but after seeing spec sheets and reviews, the HTC Incredible comes out ahead anyway.) So I'm (finally) looking for Android apps that are must-haves for system administrators. I've found the three below. If there are others you prefer over these, let me know.

        RDP program - RemoteRDP
        SSH client - ConnectBot
        Nagios - NagMonDroid

    Reply with your favorite Android app and why!

    Read the article

  • What to do when ServerSocket throws IOException and keeping server running

    - by s5804
    Basically I want to create a rock-solid server.

        while (keepRunning.get()) {
            try {
                Socket clientSocket = serverSocket.accept();
                // ... spawn a new thread to handle the client ...
            } catch (IOException e) {
                e.printStackTrace();
                // NOW WHAT?
            }
        }

    In the IOException block, what should I do? Is the server socket at fault, so that it needs to be recreated? For example, wait a few seconds and then:

        serverSocket = ServerSocketFactory.getDefault().createServerSocket(MY_PORT);

    However, if the server socket is still OK, then it is a pity to close it and kill all previously accepted connections that are still communicating.

    EDIT: After some answers, here is my attempt to deal with the IOException. Does this implementation guarantee keeping the server up, re-creating the server socket only when necessary?

        while (keepRunning.get()) {
            try {
                Socket clientSocket = serverSocket.accept();
                // ... spawn a new thread to handle the client ...
                bindExceptionCounter = 0;
            } catch (IOException e) {
                e.printStackTrace();
                recreateServerSocket();
            }
        }

        private void recreateServerSocket() {
            while (keepRunning) {
                try {
                    logger.info("Try to re-create Server Socket");
                    ServerSocket socket = ServerSocketFactory.getDefault()
                            .createServerSocket(RateTableServer.RATE_EVENT_SERVER_PORT);
                    // No exception thrown, then use the new socket.
                    serverSocket = socket;
                    break;
                } catch (BindException e) {
                    logger.info("BindException indicates that the server socket is still good.", e);
                    bindExceptionCounter++;
                    if (bindExceptionCounter < 5) {
                        break;
                    }
                } catch (IOException e) {
                    logger.warn("Problem to re-create Server Socket", e);
                    e.printStackTrace();
                    try {
                        Thread.sleep(30000);
                    } catch (InterruptedException ie) {
                        logger.warn(ie);
                    }
                }
            }
        }

    Read the article

  • Create an AWS AMI for Ubuntu with GUI which automatically launches web browser

    - by Rory MacDonald
    I've got an Ubuntu AMI set up with the Ubuntu desktop installed and Chrome installed and set to start automatically (via the startup programs menu within the Ubuntu desktop). I've created an image of this AMI, but any time I launch a new instance from it, the Ubuntu GUI doesn't seem to load until I SSH into the machine, enable VNC and then connect to the machine via Chicken VNC. At that point, the desktop appears to load and starts the browser. I really need the machine to boot and the browser to load without having to VNC into the machine. Any help would be appreciated.

    Read the article

  • Why Android for enterprise applications?

    - by mcabral
    Recently one of our clients has been considering the possibility of picking up an old WinMobile 5.0 project. Several features are to be added, to the point that it will be a major version update. The client is worried about the mobile market, and thinks there's a chance all the effort put into this development will have to be thrown away in a couple of years due to the dynamics of the mobile market and the deprecation of mobile devices. So the client is not sure whether he should continue with Windows Mobile (changing from WM 5.0 to 6.x) or start from scratch with another technology. For our part, we have been studying the mobile market, looking for clues about which will be the winning horse. The safe move seems to be continuing with WM, just because rewriting an entire application from scratch involves more risks and delays. On the other hand, WM seems to be losing market share and the ghost of an exit on their part is growing stronger every day. But what can be said about Android? Everyone is talking about it and it is growing at full speed, but what advantages will it bring to the table? Why should we start a fresh application on this technology? So the question remains the same: is Android mature enough for an enterprise application? Would you recommend it to one of your clients? Would you port/rewrite a WM application to Android? What's the trade-off? EDIT: Addressing the comments: the app is entirely built with C# and the Compact Framework. The app is for logistics/management.

    Read the article
