Search Results

Search found 17188 results on 688 pages for 'browser plugins'.

Page 476/688 | < Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >

  • Apache virtual hosts - Resources on website not loaded when accessed from other hostname than localhost

    - by Christian Stadegaart
    Running virtual hosts on Mac OS X 10.6.8 with Apache 2.2.22. /etc/hosts is as follows:

        127.0.0.1       localhost 3dweergave studio-12.fritz.box
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Virtual hosts configuration:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/3dweergave"
            ServerName 3dweergave
            ErrorLog "logs/3dweergave-error_log"
            CustomLog "logs/3dweergave-access_log" common
            <Directory "/opt/local/www/3dweergave">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName main
        </VirtualHost>

    This outputs the following settings:

        *:80 is a NameVirtualHost
            default server 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
            port 80 namevhost 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
            port 80 namevhost main (/opt/local/apache2/conf/extra/httpd-vhosts.conf:34)

    I made 3dweergave the default server by putting it first in the list. This causes all undefined virtual host names to load 3dweergave, so http://localhost points to 3dweergave. Normally the first entry in the list would be the virtual host main, and localhost would point to main, but for testing purposes I switched them. When I navigate to http://localhost, my CakePHP default homepage shows as expected: Screenshot 1. But when I navigate to http://3dweergave, my CakePHP default homepage doesn't show as expected. It looks like every relative link to a resource is rejected by the server: Screenshot 2. For example, the CSS isn't loaded. When I open the source and click on the link, the CSS file opens in the browser without errors. But when I run Firebug while loading the page, it seems the CSS file isn't retrieved (<link rel="stylesheet" type="text/css" href="/css/cake.generic.css" />). How can I fix this unwanted behaviour?
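
    The stylesheet path in the markup is root-relative (/css/cake.generic.css), so whether it resolves depends on which virtual host answers the request. Below is a small diagnostic sketch (Python 2, run on the same Mac) that fetches the stylesheet twice with different Host headers so the responses of the two name-based virtual hosts can be compared directly; it is only a way to observe the behaviour, not a fix.

        # Request the stylesheet from each virtual host by name and compare the
        # responses; a 404 under one Host header but not the other points at the
        # vhost (or its DocumentRoot/.htaccess) rather than at CakePHP itself.
        import httplib

        CSS_PATH = "/css/cake.generic.css"

        for host in ("localhost", "3dweergave"):
            conn = httplib.HTTPConnection("127.0.0.1", 80)
            conn.request("GET", CSS_PATH, headers={"Host": host})
            resp = conn.getresponse()
            print("%-12s %s %s" % (host, resp.status, resp.getheader("content-type")))
            conn.close()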

    Read the article

  • Mobile Web Applications – A guide for professional development

    - by JuergenKress
    (Tobias Bosch, Stefan Scheidt, Torsten Winterberg / Opitz Consulting Deutschland GmbH). There is real hype around mobile solutions. Smartphones and tablets are everywhere. Frontend architecture is changing quickly to adopt cross-browser technologies like HTML5 and extensive JavaScript-based development. In this book we introduce our software development process for building test-driven single-page JavaScript web applications, which we see as the future alongside native apps. We start with a short introduction of our RYLC showcase (known from our SOA articles), give a very short introduction to JavaScript, then cover jQuery Mobile, AngularJS, testing and back-end communication, and we close by deploying our RYLC web app as a hybrid app using the PhoneGap (Cordova) framework. Don’t expect too much theory – it’s a practical guide explaining how the RYLC web app was built, to kickstart your own development. Currently only available in German as paperback and eBook. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Cannot Enter Repro Admin Web Interface at Port 5080

    - by aqua
    I have followed the instructions on the website www.rtcquickstart.org to set up my firewall, DNS settings and TLS, installed the TURN server and repro proxy as instructed, and restarted repro. However, I am not able to access the web interface of repro on port 5080, either at localhost:5080 / 127.0.0.1:5080 or at the server's IP address (IPADDRESS:5080 – I have set the server's IP for binding in repro.config). I get the browser error message 'Unable to connect to server' whenever I try to connect to the web interface via port 5080. I initially had Apache2 installed, which loaded pages correctly at port 80 / the address root, and when checked it 'listened' on port 5080 after it was configured in /etc/apache2/ports.conf; however, the repro web interface still did not work on port 5080. I have tried uninstalling Apache2 in case it was conflicting with repro's web server, but the problem persists, and testing port 5080 now shows that nothing is listening on it. I have tried reinstalling / purging repro but it has not helped. My router is set correctly to allow all ports; port 5080 is open and forwarding correctly. I can connect to the internet and ping all websites through the server, and everything else is working correctly. I would be grateful if anyone could offer advice on how to solve this problem.

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game in a browser, and the server in C++ (an abstract part of which is already written and used with other games). Networking models may differ from each other, but currently I'm leaning toward the game logic running on both the client and the server even though they are written in different languages – that is not the main problem. My previous game (a pretty big one – implemented by about 5 programmers over 1.5 years) was mainly "written" in spreadsheets as structured objects with inheritance: a standalone tool was written that generated AS3 and C++ (the languages of the platforms the game was published on) from a specified spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each and was mainly written by game designers who do not know any programming language. But that game was single-player. Given that, for the MMO I am currently implementing I'm looking for some broad pipeline that resolves problems like:

    - game object descriptions (which starships exist in the game, how many HP they have, how fast they move, what damage they deal...)
    - action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells) – actions are transmitted through the server between clients
    - influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: field 'HP' of Ship B is reduced by the amount of field 'damage' of Ship A"). Influences can be much more complicated, e.g. "damage is twice its size when the ship has >= 5 allies around it in a 200-unit range during night", and so on.

    If such logic could be written in a "design document" of this kind, it would easily be possible to:

    - let designers do their job without programmer intervention or any bug-prone programming
    - validate the described logic
    - transfer (transform, convert) it to any programming language where it will be executed

    Has anybody worked on something like that? Are there tools/engines/pipelines concerned with this? How can I handle all of these problems simultaneously in the best way, or am I framing my tasks and problems properly?
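
    As an illustration of the data-driven approach the post describes, here is a minimal Python sketch in which an "influence" is plain data (the kind of row a designer could fill in a spreadsheet) and a tiny interpreter applies it to game objects. All names, fields and the rule format are hypothetical, not taken from the poster's project.

        # Game objects as they might be produced from the designers' tables.
        ships = {
            "A": {"hp": 100, "damage": 15},
            "B": {"hp": 100, "damage": 10},
        }

        # "When the attacker hits the target, reduce the target's hp by the
        #  attacker's damage" -- expressed as data, not code.
        attack_rule = {
            "target_field": "hp",
            "operation": "subtract",
            "source_field": "damage",
        }

        def apply_influence(rule, attacker, target):
            amount = attacker[rule["source_field"]]
            if rule["operation"] == "subtract":
                target[rule["target_field"]] -= amount

        apply_influence(attack_rule, ships["A"], ships["B"])
        print(ships["B"])  # {'hp': 85, 'damage': 10}

    Because the rule is data, the same description can be validated, reviewed by designers, and code-generated into AS3 for the client and C++ for the server, which is the property the poster is after.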

    Read the article

  • Unity calendar lens not showing events in Ubuntu 12.04

    - by David_G
    I'm trying to get proper/useful calendar integration into Ubuntu 12.04. I have a Google Calendar (and account) and I want to be able to use it without opening the browser. I want to get the Unity calendar lens working so that it shows upcoming events and gives me a quick way to add new ones. However, after installing it, it does not find any events, nor does it allow me to add a new event. Note that I've installed Lightning 1.4, Evolution mirror 0.2.3, Evolution, and the unity-calendar lens. I've also installed calendar-indicator. I suspect that somehow the lens is not getting the calendar information from Thunderbird via Evolution. A bit of searching around led me to try this command: /usr/lib/calendar-lens/calendar-lens-daemon.py. With this result:

        /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
          import gobject._gobject
        Traceback (most recent call last):
          File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 324, in <module>
            daemon = Daemon()
          File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 80, in __init__
            for calendar in evolution.ecal.list_calendars():
        AttributeError: 'NoneType' object has no attribute 'list_calendars'

    Any ideas?
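
    The traceback says that evolution.ecal is None inside the daemon, i.e. the Evolution calendar bindings the lens depends on are not usable. A quick hand check of the same call path is sketched below (diagnostic only, run with the system Python 2, assuming the python-evolution bindings are what the daemon imports):

        # Reproduce the daemon's failing call outside the lens: if ecal is None
        # here too, the problem is the Evolution Python bindings, not the lens.
        import evolution

        print(evolution.ecal)
        if evolution.ecal is not None:
            print(evolution.ecal.list_calendars())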

    Read the article

  • MvcReportViewer v.0.4.0 is available!

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2014/06/04/mvcreportviewer-v.0.4.0-is-available.aspx Today I released a new version of MvcReportViewer. This release contains mostly bug fixes reported by library users. I am glad to see that the open source model works and people try to contribute to the project! Thank you everybody for your bug reports and help with the project. Version 0.4.0:

    - Added support for ASP.NET MVC 5.
    - Removed the jQuery dependency. I have not tested it on IE8 or earlier versions; any help with testing is welcome!
    - Fixed a problem with SSRS keep-alive cookies. Keep-alive cookies are issued every time a report is opened during a browser session. Many people don't restart their browsers, and in my case Chrome doesn't get rid of the cookie session data on close – I had to delete the cookies manually for the reports to start working again. I added a KeepSessionAlive control setting to manage SSRS keep-alive behavior. It is set to false by default to fix the "Bad Request 400: Request Too Long" issue. You can find a usage example in Fluent.cshtml.
    - Fixed the bug where ReportViewer control parameters were not parsed when the ShowParameterPrompts parameter had not been set.
    - Changed the public static MvcReportViewerIframe MvcReportViewer method to use IEnumerable<KeyValuePair<string, object>> reportParameters instead of a simple object. The reason is that users reported they mostly use multiple values per report parameter.
    - Added support for SSRS hosted on Windows Azure. Users should set the MvcReportViewer.IsAzureSSRS property to true in Web.config to use Windows Azure authentication. I do not have Windows Azure SSRS and built the code using the http://msdn.microsoft.com/en-us/library/gg552871.aspx#Authentication article. It would be nice if somebody from the community tested the change or provided me with a test report on Windows Azure for testing purposes.

    Read the article

  • 12.04 WiFi issue on a particular access point

    - by user71706
    I have a WiFi access point that I connect to a PC to share its Internet connection with multiple machines in a training environment. All the machines with 11.04 connect to this access point with no problem and can access any server on the Internet. These machines have an Intel Wireless-N 1030 BGN chipset (as reported by lspci). Now, my problem is that I can't get 12.04 machines to work on this wireless network. The systems I tried do manage to connect (confirmed by Network Manager), but when I try to access a website like http://kernel.org, the browser shows "Connecting to kernel.org..." and then displays a "The connection has timed out" error page. Other symptoms:

    - Name resolution works (for example, 'nslookup kernel.org' finds kernel.org's IP address)
    - 'ping kernel.org' doesn't work
    - The same 12.04 machines have no problem at all with other wireless networks

    So there is probably something weird in my access point (though the 11.04 machines are not affected). Would you have any suggestions for investigating this issue? Thanks, Michael.

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by kss
    When I run sudo apt-get install acroread, I get the following output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          libldap2 libgnome-speech7
        The following NEW packages will be installed:
          acroread
        0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/60.1 MB of archives.
        After this operation, 142 MB of additional disk space will be used.
        (Reading database ... 237901 files and directories currently installed.)
        Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
         failed in write on buffer copy for backend dpkg-deb during `./opt/Adobe/Reader9/Browser/intellinux/nppdf.so': No space left on device
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        /usr/bin/mandb: can't write to /var/cache/man/1645: No space left on device
        Errors were encountered while processing:
         /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Pasting from vim in terminal to Google Docs (Firefox + Vimperator) - need to understand

    - by LIttle Ancient Forest Kami
    I had some trouble copy-pasting text from vim in a terminal to a Google Docs (aka Drive) document (hereafter GDd) in the FF browser (with Vimperator). Notes:

    - I have a file opened in Vim 7.2 in a terminal
    - :version displays both +clipboard and +xterm-clipboard
    - I'm on Ubuntu 10.04 LTS, so I don't think this is Unity-related
    - I want to use Vim, not GVim, nor gedit...
    - I'm an avid fan of mouseless navigation, so a solution involving the mouse was not what I wanted

    I have the solution, but I need understanding. What I tried and where it got me:

    1. Yanking the whole file text via ggvGy allows me to paste it via the mouse middle button, NOT with Ctrl+v or Shift+Insert – here, in the text area for entering question text, and in gedit, but NOT in GDd where I want it pasted, even if I switch Vimperator to pass-through mode with Insert. It does NOT show in XClip after xclip -o. From gedit, I can copy-paste the text into GDd (Vimperator's pass-through mode not required).
    2. :%! !xclip -i (or :first, last) reports the whole file (all lines, to be precise) as filtered, though the shell returns 1.
    3. 'xclip -o' returns nothing (is empty) or returns the previously copied value – given 2., no surprise, but then I can't paste at all, not only into GDd but also into gedit or here.
    4. Setting clipboard to unnamed (:set clipboard=unnamed) doesn't help.
    5. Using "+y or "*y on the whole file text actually does the trick.

    So, the question (it's actually three; say "split" and I will):

    - Why does the middle mouse button paste different things than Ctrl+v, and how do I know what will be pasted with each?
    - Why does plain yanking (without registers) work with the mouse but not with the keyboard / XClip?
    - Why didn't the unnamed register help? After setting it, shouldn't the unnamed and * registers be the same?

    Read the article

  • What are some efficient ways to set up my environment when working on a remote site?

    - by Prefix
    Hello fellow programmers, I am still a relatively new programmer and have recently gotten my first on-campus programming position. I am the sole dev responsible for 8 domains as well as 3 small PHP web apps. The campus has its web environment divided into staging and live servers – we develop on staging via SFTP and then push the updates to the live server through a web GUI. I use Sublime Text 2 and the Sublime SFTP plugin for all my dev work (it's my preferred editor). If I am just making an edit to a page, I'll open that individual file via the FTP browser. If I am working on the PHP web app projects, I have the app directory mapped to a local folder so that when I save locally the file is auto-uploaded through Sublime SFTP. I feel like this workflow is slow and sub-optimal. How can I improve my workflow for working with remote content? I'd love to set up a local environment on my machine, as that would eliminate the constant SFTP upload/download, but as I said there are many sites, and the space required for a local copy of the entire domain would be quite large and complex; not to mention that keeping it in sync with whatever is latest on the staging server would be a nightmare. Does anyone know how I can improve my general web dev workflow from what I've described? I'd really like to cut out constantly editing over FTP, but I'm not sure where to start other than ripping the entire directory and dumping it into XAMPP.

    Read the article

  • The Enterprise Side of JavaFX: Part Two

    - by Janice J. Heiss
    A new article, part of a three-part series, now up on the front page of otn/java, by Java Champion Adam Bien, titled “The Enterprise Side of JavaFX,” shows developers how to implement the LightView UI dashboard with JavaFX 2. Bien explains that “the RESTful back end of the LightView application comes with a rudimentary HTML page that is used to start/stop the monitoring service, set the snapshot interval, and activate/deactivate the GlassFish monitoring capabilities.” He explains that “the configuration view implemented in the org.lightview.view.Browser component is needed only to start or stop the monitoring process or set the monitoring interval.” Bien concludes his article with a general summary of the principles applied: “JavaFX encourages encapsulation without forcing you to build models for each visual component. With the availability of bindable properties, the boundary between the view and the model can be reduced to an expressive set of bindable properties. Wrapping JavaFX components with ordinary Java classes further reduces the complexity. Instead of dealing with low-level JavaFX mechanics all the time, you can build simple components and break down the complexity of the presentation logic into understandable pieces. CSS skinning further helps with the separation of the code that is needed for the implementation of the presentation logic and the visual appearance of the application on the screen. You can adjust significant portions of an application's look and feel directly in CSS files without touching the actual source code.” Check out the article here.

    Read the article

  • What are the requirements to test a website using jquery.get() ? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux), or do I have to have it uploaded to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie

    PS: Here's the JS/jQuery code for downloading the files into an array.

        g_lists = new Array();

        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure; I'm new to web development. Here are some examples it produces and the directory tree of the site. Maybe it will help; it can't hurt.

        .
        +-- include
        |   +-- jquery.js
        |   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt

    Examples of path:

        ../txt/Scripting_Tools/Editors.txt
        ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question; I'll just post it here: after asking for help on an IRC channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.
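
    The poster's eventual fix was to serve the site over HTTP so the $.get() requests hit a web server instead of the local filesystem. A lighter-weight alternative to installing Apache (a sketch only, not what was actually used here) is Python's built-in static file server – run it from the site's root directory, equivalent to "python -m SimpleHTTPServer 8000", and browse to http://localhost:8000/:

        # Serve the current directory over HTTP on port 8000 (Python 2).
        # The jQuery $.get() calls then come from the same origin as the page,
        # rather than from file:// URLs.
        import SimpleHTTPServer
        import SocketServer

        PORT = 8000
        handler = SimpleHTTPServer.SimpleHTTPRequestHandler
        httpd = SocketServer.TCPServer(("", PORT), handler)
        print("Serving on http://localhost:%d/" % PORT)
        httpd.serve_forever()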

    Read the article

  • IE8 HTTPs Download Issue

    - by Jon Egerton
    I have a problem with a system I develop, related to IE8 downloading over SSL (i.e. on sites using https://...), as described in this MS KB article: http://support.microsoft.com/kb/323308 We use the HttpCacheability.NoCache option, as the data being downloaded is sensitive and is downloaded from a secured site. I don't want that data to be cached on any of the proxies the response passes through on its way back to the client. The article describing the issue details a client-side fix: a registry change to a BypassSSLNoCacheCheck setting. I don't want to loosen the system security just for IE8, as the system works fine on anything more up to date. Getting all the clients to apply the hotfix is difficult at best and impossible at worst. We need to support IE8 in the system, at least for now. So:

    1: Does the detailed hotfix have any implications for security at the browser end in IE8 – does it mean the file will be cached (in a place other than where the user saves the file)?
    2: Is there some way I can make these files downloadable with a change at the server end that doesn't break the security side of things?

    Read the article

  • How do you save/export changes made in Firebug?

    - by blunders
    Using Firebug to edit CSS, how do I save/export the changes I've made to the CSS? TOOLS: Firefox, Firebug

    MAJOR UPDATE: If you know of a way to lock the forward/back/refresh on a Firefox tab, please let me know. Otherwise, I've given up on using Firebug/FireDiff as an IDE for CSS. It's nice, but lol... press backspace at the wrong time and ALL your work is gone... funny. So, I really like the browser highlighting of CSS/HTML in Firebug. Know any good CSS editors that do this? I really had hoped Firebug would work, but for now I only see it as being good for ad-hoc inspection and testing; meaning using it for what it's made for.

    UPDATES:

    @Lèse majesté: Just as an update, the "Web Developer add-on" does let you edit CSS, but it does not let you edit/save CSS changes made by Firebug. Meaning you can use Firebug to identify and maybe test changes, but it does not let you save the changes from Firebug. Here's a "how to" covering how to use them together: FF + FB + WD

    @Lèse majesté: Still playing around with FireDiff. It works okay – I found one bug already (although I'm just working around it), and there's no "how to" I've been able to find, so I'm just trying every feature and clicking around (for example, to export a diff you must be over the last item in the list, right-click, and select "Save Diff"). The ".diff" is just a text file; no idea at this point why the extension is .diff.

    Read the article

  • How do I theme the Nautilus background image?

    - by Kesymaru
    I want to change the background image in the Nautilus file browser; my idea is to put my own style in the background. I'm using Ubuntu 11.10 and Nautilus is version 3. I know that I have to change the nautilus.css file of the theme, but the problem is that there is no parameter for the background. I just want to apply an image, but I can't find the file or parameter to change. The CSS file is in the directory /home/UserName/.theme/MyTheme/gtk-3.0/apps. I've changed the nautilus.css file: I added two new lines in CSS style, but I don't know where the correct place is to put them. The lines are:

        background-image: url("carbon.jpg");
        background-repeat: repeat;

    Obviously I put the image called carbon.jpg in the same directory as nautilus.css, but this change doesn't work because I need to know which class draws the Nautilus file-browsing frame. If I find this class, I guess this code will work. If someone knows how to do it, please tell me, because I really want to make this change.

    Read the article

  • Internet Timeouts with TP-Link TL-WN821N v2 wireless usb stick

    - by user1622959
    A short time after accessing the internet, the browser/download times out. Before the timeout, the internet works OK briefly; afterwards, the wireless is still connected with a strong signal, but every internet access results in a timeout. When I leave the PC for a while, the internet is back, just to time out again as soon as I start using it. The same happens when I reconnect to the router. Also, when I surf the internet it takes a couple of minutes until the timeout, but when I download something it times out in a matter of seconds. The wireless adapter works just fine in Windows, and internet via ethernet cable works just fine in Ubuntu. Does anyone have the same problem or know a solution? I use Ubuntu 12.10 x64. The problem has occurred since I installed Ubuntu (which was a few days ago). Here is some output that might be useful:

        serus@serus-Ubuntu-PC:~$ lsusb
        Bus 002 Device 002: ID 0cf3:1002 Atheros Communications, Inc. TP-Link TL-WN821N v2 802.11n [Atheros AR9170]

        serus@serus-Ubuntu-PC:~$ lsmod
        Module                  Size  Used by
        carl9170               82083  0

        serus@serus-Ubuntu-PC:~$ modinfo carl9170
        filename:       /lib/modules/3.5.0-21-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko
        alias:          arusb_lnx
        alias:          ar9170usb
        firmware:       carl9170-1.fw
        description:    Atheros AR9170 802.11n USB wireless

        serus@serus-Ubuntu-PC:~$ iwconfig
        wlan0     IEEE 802.11bgn  ESSID:"virginmedia0137463"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: A0:21:B7:F8:29:B6
                  Bit Rate=240 Mb/s   Tx-Power=20 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
                  Link Quality=66/70  Signal level=-44 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:1399  Invalid misc:18  Missed beacon:0

        serus@serus-Ubuntu-PC:~$ sudo lshw -C network
          *-network
               description: Wireless interface
               physical id: 1
               bus info: usb@2:2
               logical name: wlan0
               serial: 00:27:19:bb:00:19
               capabilities: ethernet physical wireless
               configuration: broadcast=yes driver=carl9170 driverversion=3.5.0-21-generic firmware=1.9.4 ip=192.168.0.6 link=yes multicast=yes wireless=IEEE 802.11bgn

    Read the article

  • Click buttons on the mouse stopped working in 12.10

    - by Kushal
    Everything was great for a couple of weeks after the upgrade, and then all of a sudden the click buttons on my trackpad (as well as on any other USB mouse I would hook up) stopped working. The pointer moves fine, but the clicks don't work. Sometimes the left click doesn't work but the right click does, and sometimes neither works. I noticed this would begin when I accidentally dragged some text in a web browser (you know how, when you try to move the pointer on the trackpad, you accidentally tap down and it starts to drag whatever text you've selected in the window) – and then you're done: the clicks won't work after that. They would work again after rebooting or logging off and back on, but then after a few minutes of usage things would go back to being broken. It happened a LOT when I was trying to play Scrabble on Facebook. I've raised a bug for this, but I haven't heard anything back on it. Here's the bug report: https://bugs.launchpad.net/bugs/1077805 Since the system was unusable this way, I had to remove it and install another OS based on 12.04. Has anyone else faced this issue, or does someone know what to do to fix it? I'd go back to vanilla Ubuntu in a heartbeat if this issue can be fixed.

    Read the article

  • Newbie tips, please [closed]

    - by eXeP
    So, I just got a new computer and I want to put Ubuntu on my old laptop. I just need a few tips before installing it.

    1. Programs: where to download them, how to download them, and what the file "ending" is (Windows has .exe).
    2. How much is the command line involved? And where can I learn the most common commands?
    3. A few programs you recommend (graphics editing, IDE, video player, web browser).
    4. Do I have to download drivers when installing the new OS? I plan on getting rid of Windows completely. I have no idea of the name of my graphics card, so how can I find out what it is if I have to download drivers? (I don't know the name because it's not on the original box, or anywhere on the internet, believe me.)
    5. When installing a new OS, does it destroy everything else on the hard drive?
    6. Anti-virus: do I need one? I'm not super paranoid, and I don't visit "shady" sites.

    Please note that I have never used Linux, or any OS other than Windows, and sorry for my bad English. If this is the wrong place to post this, then please remove it. Thank you.

    Read the article

  • Web migration of a VB6 system with VWG

    - by Webgui
    Brinks Bolivia's eSAC (Customer Service) system allows registering all different kinds of contacts for a customer, in addition to maintaining an up-to-date status of each service or customer request, so the company has accurate information and can perform the appropriate procedures for all requests. The system was originally developed in VB6, and since web access was essential it was offered via Citrix. Because the application's performance was a critical issue, and because of the need to offer the system without client-specific installations, the company looked for a solution that would remove those drawbacks of using Citrix. Searching for a solution that would allow it to offer the eSAC system over the web without client-specific installations, and provide sufficient performance even with limited bandwidth, led Brinks to the decision to migrate their VB6 customer service system to Visual WebGui. "Developing on Visual WebGui we were able to migrate the system to web environment and even add new features in less time which allows us to offer it over a standard web browser with better performance and no installations as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • Ubuntu-Java Installation-Topcoder-“Java not found”

    - by hakuna121
    I realize that this might be the dumbest question here, but being the total beginner that I am, I really couldn't figure it out after trying all kinds of instructions I could find on the web. Specs: Ubuntu 13.04. What I intended to do: check out the Algorithm Competition section by clicking the second tab from the left, located at the top-left of the page http://community.topcoder.com/tc What I got: a pop-up saying

        Java not found! Java could not be automatically detected on your machine. This page will attempt to automatically install Java and Java Web Start. If the download and installation does not occur automatically, click the link below to go to the Sun website where you can download the latest version of Java.

    What I did: I followed the instructions on this Ubuntu documentation page, https://help.ubuntu.com/community/Java, and installed OpenJDK (Java Runtime Environment / browser plugin / SDK) through the Ubuntu Software Center. Then I rebooted the system and tried the page again, but I still got the "Java not found" pop-up described above. Question: what's missing to get this working? Thank you!

    Read the article

  • Huge spike in traffic tracked by Google Analytics from Safari browsers

    - by Petra Barus
    My site urbanindo.com recently experienced a huge spike in traffic as tracked by Google Analytics. The spike sometimes shows up on several of the same pages. This is odd because I rarely experienced that much traffic before. Some pages can have hundreds of visitors at the same time, but when I read the web server log, those pages show up in only one or two entries, not hundreds like GA shows. The only similar thing about the entries is that they use similar browser user agents:

        Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3
        Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3

    When I opened Google Analytics under Audience > Technology > Browsers & OS and plotted the chart by browser, I saw that the huge spike came from Safari. The spike started at the beginning of this August, which happens to be when I started using multiple web servers behind a load balancer (although I'm pretty sure those two are not related). Is there something wrong with my GA configuration?
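
    One way to make the log-versus-GA comparison systematic is to count hits per user agent for one of the suspicious pages directly from the access log. The sketch below (Python, standard library) assumes an Apache/nginx combined log format, where the user agent is the last quoted field; the log path and page URL are placeholders.

        # Count requests per user agent for a given page in a combined-format log,
        # to cross-check the server's numbers against the GA report.
        from collections import Counter

        LOG_FILE = "/var/log/apache2/access.log"   # placeholder path
        PAGE = "/some-listing-page"                # placeholder URL

        agents = Counter()
        with open(LOG_FILE) as log:
            for line in log:
                if PAGE in line:
                    agent = line.rsplit('"', 2)[-2]   # last quoted field
                    agents[agent] += 1

        for agent, hits in agents.most_common(10):
            print("%6d  %s" % (hits, agent))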

    Read the article

  • How do I change pages registering as 404 to 200

    - by christian
    I have this problem: after relaunching my site http://www.kgstiles.com, traffic dropped immensely (about 60%). After troubleshooting for a week and a half – losing thousands of dollars of traffic in the process – I found that Google was getting a 404 error at the end of many of my 301 redirects (so it wouldn't index the new pages). Most of the pages, though, would load fine in my browser. They registered as 404 errors in Google's index as well as in a 404 checker. So my first question is: could this be what's causing my loss of traffic? And second: how do I fix it? I'm desperate! Any help is appreciated!

        # BEGIN s2Member GZIP exclusions
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{QUERY_STRING} (^|\?|&)s2member_file_download\=.+
        RewriteRule .* - [E=no-gzip:1]
        </IfModule>
        # END s2Member GZIP exclusions

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^moreinfo/(.*)$ http://www.kgstiles.com/moreinfo$1 [R=301]
        RewriteRule ^healthsolutions/(.*)$ http://www.kgstiles.com/healthsolutions$1 [R=301]
        RewriteRule ^(.*)\.html$ $1/ [R=301,L]
        RewriteRule ^(.*)\.htm$ $1/ [R=301,L]
        </IfModule>
        # END WordPress
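
    A quick way to see what Googlebot sees is to follow a redirected URL hop by hop and print the status code of every response, so a 301 chain that ends in a 404 shows up immediately. Below is a diagnostic sketch (Python 2, standard library only); the URL at the bottom is a placeholder, not a real page on the site.

        # Follow a URL's redirect chain manually (without auto-following) and
        # print each hop's status code and URL.
        import urlparse
        import httplib

        def trace(url, max_hops=10):
            for _ in range(max_hops):
                parts = urlparse.urlsplit(url)
                conn = httplib.HTTPConnection(parts.netloc)
                path = parts.path or "/"
                if parts.query:
                    path += "?" + parts.query
                conn.request("GET", path)
                resp = conn.getresponse()
                print("%s %s" % (resp.status, url))
                location = resp.getheader("location")
                conn.close()
                if resp.status in (301, 302) and location:
                    url = urlparse.urljoin(url, location)
                else:
                    return

        trace("http://www.kgstiles.com/moreinfo/example-page.html")  # placeholder URL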

    Read the article

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to do something about the serialisation ourselves. There appear to be a variety of LZW implementations in JavaScript, some of which confuse me about their suitability for in-browser use only versus transmission across the wire, due to my limited understanding of low-level encodings. More importantly, all of them seem to impose a noticeable performance drag when JavaScript is the engine doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in JavaScript itself; BSON does not have a JavaScript implementation; and an alternative BISON project that I tested does not deserialise everything back to its original values (large numbers), and it does not look like any further development will happen there either. What are some other options others have explored for Node.js?
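
    For a rough sense of the payload savings being discussed, here is a small illustration (in Python, standard library only) of how much a repetitive JSON message shrinks under DEFLATE – the same family of compression that the permessage-deflate websocket extension later standardised. It only illustrates the size trade-off; it is not the Node.js implementation the post asks about.

        # Compare the size of a repetitive JSON payload before and after DEFLATE.
        import json
        import zlib

        message = {"rows": [{"id": i, "status": "active", "score": i * 1.5}
                            for i in range(200)]}

        raw = json.dumps(message)
        compressed = zlib.compress(raw.encode("utf-8"), 6)

        print("raw bytes:      %d" % len(raw))
        print("deflated bytes: %d" % len(compressed))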

    Read the article

  • How much overhead is there in persistent connections?

    - by nynex
    OK, so I'm musing over a little side project I want to start. Essentially it's a multi-session web-based FTP client – multi-session in that you can log into several FTP servers at the same time and perform operations like moving a file from one FTP server to another. I'm doing this mainly to brush up on the new web dev technologies, particularly websockets. I'm using node.js + socket.io to keep a persistent bi-directional connection between the web browser and the web server. The web server will also have persistent connections to each FTP server the user has logged into. So if there are 100 concurrent users each logged into 5 FTP accounts, the web server will have 100 websocket connections + 500 FTP connections. Is servicing 600 connections a lot? I know it depends on the hardware resources of the server, but is something like this doable on a budget? Are there more efficient means of doing something like this? I know it's unlikely that this project will really get popular, but I want it to scale well regardless. Thanks for any help; I've still got a lot to learn.

    Read the article

  • How to fix the “Live INT automatically logs out”

    - by ybbest
    Problem: the Live INT environment automatically logs out. I am trying to set up authentication with Windows Live ID and followed this blog post; I have a problem logging in to the Live INT web site. Whenever I try to log in (at https://login.live-int.com/login.srf – this is the internal Live environment to be used in a dev environment), after entering a valid email/password I get redirected to the logout page. I tried 2 different accounts (one with an existing email address, and one with a newly created @hotmail-int.com address) and 3 different browsers, so I'm sure that neither the account nor the browser is the cause of this. I also tried to enter a wrong password, and in that case I get the message that the password is wrong.

    Solution: all you need is the unique ID in order to add the user to SharePoint, and you can get that ID without logging into the Live INT environment. I think the Live internal environment is not working correctly for some reason. The reason I need to log in to it is to get the unique ID for the test account so that I can add the user to SharePoint. All the blogs I have come across require you to log in in order to get the unique ID; however, I figured out another way of getting it without logging in. The steps are below:

    1. Register a new test account in the Live internal environment.
    2. Go to the SharePoint site collection that has Live ID authentication enabled and select LiveID INT (it may be named differently, depending on how you set up the authentication provider) from the dropdown.
    3. Try to log in using the internal Live account; you will get an Access Denied error as below, showing the unique ID for the test account.
    4. Add that account to your SharePoint group – boom, it works.

    I hope this will help anyone who needs to do this in the future.

    Read the article
