Search Results

Search found 17852 results on 715 pages for 'load balancer'.

Page 414/715

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are sysadmins' thoughts on mitigating the 'Firesheep' attack for the servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites, and it is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on the servers and interferes with site indexing, assets and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require the whole site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on Stack Overflow; however, it would be interesting to see what the systems engineers think.
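
    A minimal sketch of the kind of per-vhost mitigation discussed above, assuming Apache with mod_rewrite and mod_headers (hostnames and paths are placeholders, not the asker's setup): it redirects plain HTTP to HTTPS and marks cookies Secure so they are never exposed to a sniffer on the wire.

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        # Ensure session cookies are never sent over plain HTTP
        Header edit Set-Cookie ^(.*)$ "$1; Secure; HttpOnly"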

    Read the article

  • PuTTY SSH client frequently becomes unresponsive in Windows 7

    - by Sankaran
    With a Celeron processor and 512 MB of RAM, Windows 7 takes a long time even to open a folder in Windows Explorer. I connect to the Internet with a GPRS modem. When I try to connect to a remote Linux machine with the PuTTY SSH client, PuTTY becomes "Not Responding" within 2 or 3 minutes. I have to connect again, and PuTTY stops responding again. When I tried Safe Mode with Networking, the GPRS modem is not even detected; the modem is a USB data card. Is it possible to load the USB driver in Safe Mode in Windows 7, or is there any other way to connect to the remote machine via SSH?

    Read the article

  • How to achieve redundancy across data centers?

    - by BrandonBT
    I have a LAMP server with a lot of hardware redundancy built in, so I am not worried about the server itself becoming unavailable. What I am worried about, however, are potential network issues in the data center the server is in. What I would like to have is another server in another data center for redundancy; load balancing is less of a concern. With that said, I am relatively clueless on two points: how to have two servers in two geographically separate data centers hold exactly the same data, in terms of both files and MySQL databases; and how to ensure that all traffic coming into one data center is automatically redirected to the other data center in the case of a network or server failure at the first one. Any guidance on how to accomplish these two things would be greatly appreciated.
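
    A rough sketch of one common approach to the first point, assuming MySQL master-replica replication between the two data centers (hostnames, user names and paths below are placeholders); file content can be kept in sync with rsync or a tool such as lsyncd, and the second point is usually handled with low-TTL DNS failover or an external load balancer.

        # /etc/mysql/my.cnf on the primary (data center A)
        [mysqld]
        server-id = 1
        log_bin   = mysql-bin

        # /etc/mysql/my.cnf on the replica (data center B)
        [mysqld]
        server-id = 2

        -- then, once, on the replica:
        -- CHANGE MASTER TO MASTER_HOST='dc-a.example.com',
        --   MASTER_USER='repl', MASTER_PASSWORD='***';
        -- START SLAVE;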

    Read the article

  • Dark NetBeans

    - by Geertjan
    Let's make NetBeans IDE look like this. I'm not saying it's a nice color or anything, just that it's possible to do so. I changed the coloring in the Java editor by going to Tools | Options, choosing "Fonts & Colors", selecting the "Norway Today" profile, and changing the background setting to Dark Gray. Next, I put this themes.xml file into the "config" folder of the NetBeans IDE user directory: go to the exact location given as "User directory" in Help | About in the IDE, and then to the "config" folder within that folder. The "config" folder of the user directory is the readable/writable root of the NetBeans IDE virtual filesystem; if a themes.xml file is found there, it is used, as described here. Then, in the netbeans.conf file, which is not in the NetBeans user directory but in the "etc" folder of the NetBeans installation directory, I added the following to "netbeans_default_options": -J-Dnetbeans.useTheme=true --laf Metal The first of these enables usage of the themes.xml file, i.e., it tells NetBeans IDE at startup to load the themes.xml file and apply its content to the relevant UI components, while the second is needed because most, if not all, of the themes only work if you're using the Metal look and feel. Note: I must add that in most cases, whatever you're trying to achieve via a themes.xml file can probably be achieved in a different, and better, way. The themes.xml mechanism has been there forever, but it is not actively supported or tested, though it may work for the specific thing you're trying to do anyway. For example, if you're trying to change the background color of a TopComponent, use the paintComponent method of the TopComponent instead of a themes.xml file.

    Read the article

  • Incomplete Ubuntu 12.04 install dual-boot XP

    - by Mike
    This weekend was the first time I've tried to install Ubuntu. On the initial install (I am using a USB drive), the installation went all the way through and asked to restart when completed. I was not able to get GRUB to boot, and the machine kept going straight into Windows. After some research I found some articles on updating/reinstalling GRUB, so I followed those. After a day I finally got GRUB to load, but there was no Windows option, only Ubuntu 12.04, which gave me a fatal error 17 when I selected it. I booted from the USB again, deleted the partitions and installed again; this time I got an error 15. I then booted into XP, downloaded Wubi.exe, uninstalled Ubuntu and reinstalled again. The installation went to the very end and then gave an error message (I don't remember exactly what it said), something along the lines of checking my logs on my C drive. I then uninstalled Ubuntu, removed the Wubi.exe file, wiped my USB drive and downloaded to the USB again. I booted through USB and ran the install process again. It again went through the install process, but after creating the username and password and hitting continue, the installation dialogue box disappears and the spinning mouse wheel is displayed, and I never receive the prompt to restart. I can still access the Ubuntu side menu, but the wheel keeps spinning. How do I get Ubuntu to install properly?

    Read the article

  • How would I measure the amount of RAM needed per Glassfish domain? [closed]

    - by oligofren
    Possible Duplicate: Can you help me with my capacity planning? In our test environment we have a lot of apps spread out over a few servers and Glassfish domains. To make versioning easier, I would like to have one Glassfish domain per customer per app (kind of like a heavyweight version of lots of Jetty instances). But I have heard that Glassfish is kind of heavy on resources, so I would need to measure approximately how many instances would fit in the available RAM. These are low-traffic, low-load testing servers, so CPU is not really an issue, though RAM might be. How would I get an approximate measure of how much RAM is needed? This is one Glassfish 3 instance with one heavy EAR application deployed. top? jvmstat? ??
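
    One way to get a rough per-domain figure, assuming you can identify each domain's java process (the grep pattern below is a placeholder), is to look at the process's resident memory and its heap with the standard JDK tools; comparing a domain with the heavy EAR deployed against an empty domain gives an approximate per-application overhead.

        # PID of the domain's java process ("domain1" is a placeholder)
        PID=$(pgrep -f "glassfish.*domain1")
        # resident memory of the whole JVM, in kB
        ps -o rss= -p "$PID"
        # heap and GC breakdown
        jstat -gc "$PID"
        jmap -heap "$PID"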

    Read the article

  • Optimising news fetching

    - by aceBox
    I have a web scraper for scraping news from different sources on WP7. My current approach is:

    1. Load the newspaper information from an XML file.
    2. Go to the specified sections and fetch the URLs of the news items.
    3. Go to each URL and fetch the headline, image and publisher.
    4. Display everything using the MVVM architecture of Windows Phone.

    The whole thing takes place asynchronously: as soon as a URL from a section of a newspaper is fetched, it is added to the queue, and the second stage, fetching the headline, image, etc., starts; as soon as this has been fetched for even one article, it is displayed. Later on, as more articles are fetched, they are added to the list. For the fetching I am using a SmartThreadPool (http://www.codeproject.com/Articles/7933/Smart-Thread-Pool) for Windows Phone. My problem is that even fetching around 80 items (in total) from 9 publications takes more than a minute. How can I speed up the procedure? Note: I have a two-stage approach because the images are often not available with the headlines and are only found in the article.

    Read the article

  • .htaccess URL rewrite to multi-parameter item

    - by MrCS
    I just spent the last 10 hours of my life on this and am running in circles, so I was hoping someone might be able to help me. I want a specific URL to load like this: http://example.com/f/2011/image.png?attribute=small When a URL in this format comes in, I'd like to rewrite it so that it hits the server as: http://example.com/generate.php?f=2011/image.png&attribute=small Based on the above, my question is two-fold: How can I rewrite the URL in .htaccess to meet the requirements above? And if the original URL didn't have the attribute query string parameter, how can I ensure that attribute will be false/blank/etc. when it hits the server via .htaccess?
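
    A minimal .htaccess sketch along the lines asked about (the paths mirror the example URLs above): with the QSA flag the original query string is appended, so when no attribute parameter was supplied, $_GET['attribute'] is simply absent and generate.php can treat it as blank.

        RewriteEngine On
        RewriteRule ^f/(.+)$ generate.php?f=$1 [QSA,L]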

    Read the article

  • So what is Active GridLink for RAC?

    - by Ruma Sanyal
    I had referred to Active GridLink for RAC in my blog yesterday and have since received several questions on this topic, so I decided to revisit Active GridLink. With the release of version 11g, Oracle WebLogic Server started to provide strong support for the Real Application Clusters (RAC) features in Oracle Database 11g, minimizing database access time while allowing transparent access to rich pooling management functions that maximize both connection performance and availability. WebLogic is the only application server in the marketplace that has been fully integrated and certified with Oracle Database RAC 11g without losing any rich functionality. Active GridLink provides Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. As the key foundation for deeper integration with Oracle RAC, this single data source implementation in Oracle WebLogic Server supports the full and unrestricted use of database services as the connection target for a data source. For more details, and to understand how our customer NEC leverages this capability, read the whitepapers on this topic. Get in-depth 'how-to' details from this YouTube video by our resident expert, Frances Zhao.

    Read the article

  • Influence Maps for Pathfinding?

    - by james
    I'm taking the plunge and getting into game dev. It's been going well, but I've got stuck on a problem. I have a 100x100 maze, with 0 or 1 indicating whether each cell is a path or a wall. Within the maze I have 300 or so enemies and a player. The outcome I'm looking for is for all the enemies to work their way towards the player's position. Originally I did this using an A* pathfinding algorithm, but with 300 enemies it was taking forever to pathfind for each one individually. After some research I found that an influence map / collaborative diffusion would be the best way to go, but I'm having a real hard time working out how this is actually done. Firstly, how do you create an influence map? From what I understand, each of my walls will have a scent of 0, which makes them impassable, and then there is basically a radial effect from my player's position out to every other cell (so my player starts at 100, and going outwards from there each cell gets a reduced value). Is that correct? If so, how would you actually do that (math magic?)? My next problem is: if that is correct, how would my "enemies" avoid getting stuck if they have gone down the wrong way? Say my player is standing on the other side of a wall; if an enemy is just looking for larger numbers, won't it keep getting stuck? I'm doing this in JavaScript, so performance is key. Thanks for any help! EDIT: Or does anyone have a better solution? I've been reading about navmeshes, steering behaviours, pre-calculating all paths on load, etc.
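
    A minimal JavaScript sketch of the flood-fill ("collaborative diffusion") idea, assuming a grid[y][x] array where 1 marks a wall and 0 a path; this is not the asker's code, just an illustration of how the values are usually generated.

        // Breadth-first "influence" flood fill outward from the player.
        function buildInfluenceMap(grid, playerX, playerY) {
          const h = grid.length, w = grid[0].length;
          // -1 = unreached; walls are never assigned a value, so they stay impassable
          const influence = Array.from({ length: h }, () => new Array(w).fill(-1));
          const queue = [[playerX, playerY]];
          influence[playerY][playerX] = 100;          // strongest scent at the player
          while (queue.length) {
            const [x, y] = queue.shift();
            const v = influence[y][x] - 1;            // scent decays by 1 per cell
            for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
              const nx = x + dx, ny = y + dy;
              if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
              if (grid[ny][nx] === 1 || influence[ny][nx] !== -1) continue;
              influence[ny][nx] = v;
              queue.push([nx, ny]);
            }
          }
          return influence;
        }
        // Each enemy then simply steps onto the neighbouring cell with the highest
        // value. Because the values come from a flood fill that goes around walls,
        // following them can never trap an enemy the way a straight-line heuristic can.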

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I already block TRACE, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, to prevent this useless waste of resources? EDIT: I've never used the question itself to thank people for the answers, and I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP while returning a standard 404 header (a rough sketch of this is below); and a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking on one of those URLs. In less than 24 hours I was able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
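
    A rough sketch of the kind of 404 handler described above, assuming the custom 404 page is served by PHP (the log path and the list of suspect patterns are placeholders):

        <?php
        $suspects = array('viewtopic.php', 'wp-admin', 'wp-login', 'cpanel');
        $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '';
        $ua  = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
        foreach ($suspects as $needle) {
            if (stripos($uri, $needle) !== false) {
                // log (or mail) the hit: timestamp, IP, user agent, URL
                error_log(date('c') . ' ' . $_SERVER['REMOTE_ADDR'] . ' ' . $ua . ' ' . $uri . "\n",
                          3, '/var/log/suspect-404.log');
                break;
            }
        }
        header('HTTP/1.1 404 Not Found');  // always answer with a genuine 404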

    Read the article

  • Apache httpd.conf handle multiple domains to run the same application

    - by John Stewart
    What we are looking for is the ability to do the following: we have an application that can load certain settings based on the domain it is being accessed from, so if you come from xyz.com we show one logo, and if you come from abc.com we show a different one. The code is the same and runs on the same server; it just detects the domain at runtime. Now we want to get a dedicated server (any suggestions?) that will let us point all the domains we want at this server (we change the DNS for the domains to that of our server), so that when users go to any of those domains they run the same application. As far as I can understand, we will need to create a "VirtualHost" in Apache to handle this. Can we create a wildcard VirtualHost that catches all the domains? I am not an expert with Apache at all, so please forgive me if this turns out to be a silly question. Any detailed help would be great. Thanks
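
    A sketch of the kind of catch-all VirtualHost being described, assuming Apache 2.x (the names and paths are placeholders); the application then reads the requested hostname (e.g. from $_SERVER['HTTP_HOST'] or its equivalent) to pick the right settings and logo.

        <VirtualHost *:80>
            ServerName   default.example.com
            ServerAlias  *
            DocumentRoot /var/www/app
        </VirtualHost>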

    Read the article

  • Planning office network [closed]

    - by gakhov
    I'm planning to set up my office network from scratch and want to ask for professional opinions or tips. My office is connected to the Internet with a cable connection (100 Mb/s). The devices I would like to connect are a VoIP phone (RJ-11), a TV (WiFi/LAN), 3 laptops (WiFi), a few smartphones (WiFi), an iPad (WiFi), a Kindle (WiFi) and, probably, a media server (WiFi/LAN). As you can see, most of the load will be on WiFi connections (although even if the TV supports WiFi, it's probably better to connect it by LAN?). So I need help choosing the best combination of routers (or even a single one?) that will support stable connections for all these devices while minimizing the total number of routers/adapters. Any thoughts? Thank you!

    Read the article

  • SNMP - So I have a MIB. Now What?

    - by senfo
    I can't seem to get my head wrapped around the purpose of a MIB. I have a collection of ~20 MIB files that were supplied to me by the vendor, but what do I do with them? I also have a few OIDs that were supplied by the vendor that don't seem to be valid. When I issue "snmpget -v1 -c public 192.168.0.123 .1.4.6.3.2.6.2" (assume that's a valid OID), I get an error indicating the variable is unknown. Does this sound like a hardware configuration problem? Do I need to "load" (for lack of a better word) the MIB into the device? Unfortunately, the vendor has been completely unresponsive to my emails, so any help would be greatly appreciated.
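
    For what it's worth, MIBs are loaded on the management station, not on the device: they only translate numeric OIDs to readable names. A sketch with the net-snmp tools, assuming the vendor files sit in ./mibs (paths are placeholders):

        # make the vendor MIBs visible to the net-snmp tools
        snmptranslate -M +./mibs -m ALL -IR sysDescr.0
        # walk the device to see which OIDs it actually exposes
        snmpwalk -v1 -c public -M +./mibs -m ALL 192.168.0.123 .1.3.6.1.2.1
        # then query a specific object by name instead of by number
        snmpget -v1 -c public -M +./mibs -m ALL 192.168.0.123 sysDescr.0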

    Read the article

  • Problem accessing the remote working space on my new SBS 2008 box

    - by Dabblernl
    This supposedly easy-to-install OS is starting to drive me nuts... SYMPTOMS: When trying to connect to the remote workplace, I get (and ignore) the security warning, because I am currently testing with a self-issued certificate. After logging in, the remote workplace's main screen displays, but the images on it do not load. When I try to click the email link, I am thrown back to the login screen. If I try to log in to Exchange directly by typing in the remote.mydomain.com/owa address, I get a 403 error saying that I am denied access. The problem occurs on both a Vista and a Windows 7 machine. It seems that some security setting is playing tricks on me. How can I troubleshoot this?

    Read the article

  • Restart single uWSGI application (when it's in emperor mode)

    - by Oli
    I'm running uWSGI in emperor mode to host a bunch of Django sites based on their individual configs. These are supposed to reload when the emperor detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file. But occasionally I'll mess something up in the Django site and the server won't load. Yeah, yeah, I should be testing better, but that's not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (which seems to make sense). Even after I fix the underlying issue, no amount of touching will get that site's uWSGI process up and running. I have to reload the whole uWSGI server, knocking dozens of sites out at once for a few seconds. Is there a way to force uWSGI to just reload one of its sites?
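
    One workaround that is often suggested, assuming the emperor is watching a vassal directory such as /etc/uwsgi/vassals/ (paths and file names are placeholders): removing the ini from the watched directory makes the emperor destroy that vassal, and putting it back spawns a fresh one, without touching any of the other sites.

        # ordinary reload after a change
        touch /etc/uwsgi/vassals/mysite.ini
        # if the vassal has been marked as dead/broken, force a full respawn
        mv /etc/uwsgi/vassals/mysite.ini /tmp/
        sleep 2
        mv /tmp/mysite.ini /etc/uwsgi/vassals/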

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background: I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of different outside agencies for tracking these page hits. Consequently, we have tracking pixels that span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where JavaScript is needed for functionality built into the page but cannot initialize until a 404 is returned for the external tracking pixel (sometimes up to 30 seconds later). I have spent some time researching how other firms deal with this sort of instability in third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinvent the wheel, we wanted to find out how this is handled on other sites. Question: Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
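
    As a point of comparison, one common stop-gap is to inject third-party tags asynchronously so that a slow or failing vendor cannot block the page's own initialization; a minimal sketch (the URL is a placeholder, not a real vendor):

        (function () {
          var s = document.createElement('script');
          s.src = 'https://tracker.example.com/tag.js';
          s.async = true;
          s.onerror = function () { /* vendor down: ignore and carry on */ };
          document.head.appendChild(s);
        })();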

    Read the article

  • 2 workstations won't connect to most websites, but will connect to some

    - by Dean
    I have a very frustrating issue I haven't been able to solve: two workstations used by the same user are not able to connect to most websites (they receive a timeout); however, they will load some websites, specifically ones from my country. They are able to resolve website addresses via DNS. Both stations get their Internet connection through a remote router. Other stations on the same LAN connect fine. Here's what I tried: a virus scan, renewing the IPs, resetting the workstations, moving one workstation to a different RJ-45 jack in the wall, resetting the hub and switch, checking the hosts file, and a DNS flush. Nothing seems to help. I am preparing a CD with more AV tools to see if there's anything hiding on the stations. UPDATE: It was an incorrect configuration in "Internet Options". I configured the correct proxy and now it works.

    Read the article

  • Differentiating between user script input formats

    - by KChaloux
    I have a .NET project at work that provides a couple of (Iron)Python scripts to the customers, to allow them to customize the output of the program. The application generates code for certain machines and supports a couple of different formats. Until recently, we only provided a script for one format; we're now expanding on that to include support for the others. If the user is using a script, they select their input script before generating the output code. A script designed for Format1 output is going to cause errors if they're trying to generate Format2 output, and I need to deal with this. One option would be just to let the customers use common sense: if they load the wrong script it will simply fail, or worse, produce inaccurate data. I'm inclined to provide a little more protection than that. At the moment I'm considering putting a shebang-style comment line at the top of the script, along the lines of: # OUTPUT - Format1 If the user tries to run a Format2 process with a Format1 script, it will warn them. Alternatively, I could create different file extensions for the input scripts that vary by type. The file-type comment approach helps prevent the script from actually loading improperly, at the cost of failing to warn the user until they've already selected it via a dialog box. Using different file extensions would let me cut down on visual clutter in the file dialog, but doesn't actually stop them from loading the wrong script. So I'm really not sure whether the right approach is to just leave it alone or to provide some safeguards.
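
    A small sketch of the comment-line check being considered, written in Python since the user scripts are (Iron)Python; the file name and the exact marker string are assumptions for illustration only.

        def script_output_format(path):
            """Return the format named on a '# OUTPUT - ...' first line, or None."""
            with open(path) as f:
                first = f.readline().strip()
            prefix = "# OUTPUT - "
            return first[len(prefix):] if first.startswith(prefix) else None

        fmt = script_output_format("customer_script.py")
        if fmt is not None and fmt != "Format2":
            raise ValueError("Script is marked for %s, not Format2 output" % fmt)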

    Read the article

  • Is it possible to view Facebook news feeds page by page rather than loading it all as I scroll?

    - by oscilatingcretin
    If you want to scroll through your Facebook news feed (be it on the main feed, your personal feed, or in groups) to older parts, you have to scroll to the bottom, wait for the ajax load of the next part of the feed, and repeat. The problem with this is that, if you're scrolling down very far, the HTML document just gets bigger and bigger until your browser starts to die due to the overload of resources brought on by added HTML, text, and even images. This pretty much sets a limit to how far back you can scroll. Clicking on months and years on your personal feed has the same effect of cumulatively adding feed segments to the HTML document. I notice that this month/year feature is not available on the main feed and for groups. If there were a way to literally page through the feed so that only a single page's worth of feed data is loaded at a time, that would make scrolling through it much more doable.

    Read the article

  • SQL Server Reporting authentication not working

    - by Keith
    I'm not exactly sure what went wrong, but our SQL Server Reporting Services authentication is no longer working correctly. When I try to load the site, it asks for a username and password, and mine doesn't work. I checked the service and it is using the NT AUTHORITY\NetworkService account to log on. Since it is using NetworkService to log on, I read on Microsoft's site that I need to use these settings in the RSReportServer.config file: <AuthenticationTypes> <RSWindowsNegotiate /> </AuthenticationTypes> <EnableAuthPersistence>true</EnableAuthPersistence> That is what I have set, but it still asks for the password. When I set the authentication to RSWindowsNTLM, it does log in, but every time I click on a link it asks for a password again (the prompt doesn't seem to prevent anything from loading). Does anyone know what is going on here? I'm not an expert in SQL Server, so I may be missing something.

    Read the article

  • Open original Microsoft Office document (not "version 1") on Mac OS X Lion restart

    - by FlyingMolga
    My MacBook Pro running Lion has been freezing frequently lately, and I've had to restart it with the power button. When Lion starts up again, the Microsoft Office applications that were running launch and load different autosaved versions of the documents I had open (i.e. Office does not open abc.xlsx but "[version 1]" of abc.xlsx). Sometimes it also opens the original files. Several times I've entered data into these "version 1" files, only to try to save and realize that it isn't the original file, and that it is sometimes missing data contained in the original. Is there any way to make AutoRecover open the actual document with the unsaved changes, instead of creating a new temporary version?

    Read the article

  • Oracle Enterprise Manager 12c Testing-as-a-Service Solution

    - by user810030
    With organizations spending as much as 50 percent of their QA time on non-test-related activities such as setting up hardware and deploying applications and test tools, the cloud brings obvious benefits. A key component of Oracle Enterprise Manager, our Application Quality Management products have been helping our customers with application load testing, functional testing and test process management, as well as test data management, data masking and real application testing. These products enable customers to thoroughly test applications and their underlying infrastructure to help ensure the best quality, scalability and availability prior to deployment. Today, Oracle announced the Oracle Enterprise Manager 12c Testing-as-a-Service Solution. This solution allows users to significantly decrease the time needed to set up a complete test environment, while enhancing testing efficiency. Please read the press release mentioned above and join us in our Enterprise Manager LinkedIn Group discussion on this topic (you need to be a member), or visit our booth this week during the EuroSTAR Software Testing conference in Amsterdam, where we can demo this solution. I hope you find this helpful. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • What is the average page size for a single page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single page application with a lot of CSS and JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:

    - Document: 10 kB
    - Style: 60 kB
    - Images: 450 kB (already compressed; includes a big gallery of thumbnails)
    - JavaScript: 700 kB (600 kB of "framework" (jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ...) and 100 kB of custom JS)
    - Fonts: 125 kB

    And the site is not finished yet (it will include the Google Maps API and some other things...). My questions are:

    - Do you have any statistics about the average weight of an SPA?
    - Given that this is the whole website, do you think it's acceptable?
    - Is lazy loading (for images) a solution? (See the sketch below.)
    - What will the impact on SEO be? Is the "200kb rule" of Google still relevant?
    - Do you know of good tools to detect which JavaScript code is not used during the execution of a page, so that those 700 kB of framework JS can be trimmed?
    - Can a caching strategy be an answer?
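
    On the lazy-loading question above, a minimal sketch of deferring the gallery thumbnails, assuming each img carries its real URL in a data-src attribute (an assumption for illustration, not the site's actual markup):

        const io = new IntersectionObserver(function (entries) {
          entries.forEach(function (entry) {
            if (entry.isIntersecting) {
              entry.target.src = entry.target.dataset.src;  // swap in the real image
              io.unobserve(entry.target);
            }
          });
        });
        document.querySelectorAll('img[data-src]').forEach(function (img) {
          io.observe(img);
        });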

    Read the article

  • Touchpad won't work/not recognized on laptop, Ubuntu 11.10

    - by The_McManus_Position
    When I run 11.10 (Oneiric Ocelot) from the Live CD, the touchpad works with no problem. Once installed, however, the touchpad does not work at all. I have installed all available updates. I have tried installing the synaptiks touchpad package and the gpointer package from the software manager, but this did not help. Under System Settings - Mouse and Touchpad there is only a mouse tab, no touchpad tab; it seems my touchpad is not even recognized. I have tried running several things in the terminal that I found under similar questions, but I always get errors about files or drivers not being found. I have tried: sudo modprobe -r psmouse followed by sudo modprobe psmouse proto=imps, which results in an error message; and dconf-editor, where I can load the GUI but can't seem to navigate through it to find the settings that need editing. I've also read what's listed here, but I've got no GRUB menu, as I'm running only Ubuntu. As you can see, I've been trying several things and nothing has worked so far; I would appreciate any help. Thanks!

    Read the article
