Search Results

Search found 26978 results on 1080 pages for 'load testing'.

Page 332/1080 | < Previous Page | 328 329 330 331 332 333 334 335 336 337 338 339  | Next Page >

  • How to resolve concurrent ramp collisions in a 2D platformer?

    - by Shaun Inman
    A bit about the physics engine: bodies are all rectangles. Bodies are sorted at the beginning of every update loop based on the body-in-motion's horizontal and vertical velocity (to avoid sticky walls/floors). Solid bodies are resolved by testing the body-in-motion's new X with the old Y and adjusting if necessary, before testing the new X with the new Y and again adjusting if necessary. Works great.

    Ramps (rectangles with a flag set indicating bottom-left, bottom-right, etc.) are resolved by calculating the ratio of penetration along the x-axis and setting a new Y accordingly (with some checks to make sure the body-in-motion isn't attacking from the tall or flat side, in which case the ramp is treated as a normal rectangle). This also works great. Side-by-side ramps, e.g. \/ and /\, work fine, but things get jittery and unpredictable when a top-down ramp is directly above a bottom-up ramp, e.g. < or >, or when a bottom-up ramp runs right up to the ceiling or a top-down ramp runs right down to the floor.

    I've been able to lock it down somewhat by detecting whether the body-in-motion hadFloor when also colliding with a top-down ramp, or hadCeiling when also colliding with a bottom-up ramp, then resolving by calculating the ratio of penetration along the y-axis and setting the new X accordingly (the opposite of the normal behavior). But as soon as the body-in-motion jumps, the hasFloor flag becomes false, the first ramp resolution pushes the body into collision with the second ramp, and collision resolution becomes jittery again for a few frames.

    I'm sure I'm making this more complicated than it needs to be. Can anyone recommend a good resource that outlines the best way to address this problem? (Please don't recommend I use something like Box2D or Chipmunk. Also, "redesign your levels" isn't an answer; the body-in-motion may at times be riding another body-in-motion, e.g. a platform, that pushes it into a ramp, so I'd like to be able to resolve this properly.) Thanks!
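    For reference, here is a minimal sketch of the "x-penetration ratio" resolution described above, written in Python purely as an illustration; the rectangle fields, the top-left origin with y growing downward, and the specific ramp orientation are assumptions, not the poster's engine:

        # Rough sketch: resolve a body against a ramp that rises from its
        # bottom-left corner to its top-right corner, using the ratio of
        # x-penetration to pick the surface height (top-left origin, y grows down).

        class Rect:
            def __init__(self, x, y, w, h):
                self.x, self.y, self.w, self.h = x, y, w, h

        def resolve_bottom_up_ramp(body, ramp):
            """Return a corrected y for `body`, or None if no correction is needed."""
            # How far the body's leading (right) edge has entered the ramp's x span.
            penetration_x = (body.x + body.w) - ramp.x
            if penetration_x <= 0 or penetration_x >= ramp.w:
                return None  # outside the slope: treat the ramp as a normal rectangle
            ratio = penetration_x / ramp.w
            # Interpolate the surface height from the ramp's base up to its peak.
            surface_y = (ramp.y + ramp.h) - ratio * ramp.h
            if body.y + body.h > surface_y:   # feet are below the slope surface
                return surface_y - body.h     # snap the feet onto the slope
            return None

    The y-axis variant mentioned above (used for the < and > cases) would be the same function with the roles of x and y swapped.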

    Read the article

  • How to troubleshoot a wireless networking regression?

    - by fluteflute
    I've been experiencing a perhaps slightly odd bug. Wireless networking works flawlessly in Lucid, but not in Maverick or Natty. It seems to work on a partition I boot every day (as I do for my main 10.10 partition), but on my 11.04 testing partition it's a real pain, usually refusing to connect. So, given that I have both working (10.04) and non-working (10.10 and 11.04) installs, how can I troubleshoot my problem?

    Read the article

  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question came to my mind, though, when working through my first project: where do I place the majority of the R code?

    The tutorials I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code from the LaTeX file and embed the result.

    So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then, when running Sweave, I load up that .Rout file, so that I have the majority of my calculations already completed and in the Sweave R session. My LaTeX callouts to R are then quite simple: just give me the xtable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTeX file needed to achieve the desired result (embedding stats results in the LaTeX write-up).

    Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.

    Read the article

  • Recent update killed unity 3d launcher

    - by Steve
    I am scratching my head on this one; a lot of things are still new to me. I updated 126 packages just now through the update manager, and upon reboot everything works fine except the Unity launcher: it's just a dark space. The Dash still works, as do the top panel and Docky. When I try unity --replace I end up with the following and then an indefinite hang:

        (compiz:3689): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        WARN 2012-09-23 02:18:29 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubiquity-gtkui.desktop'
        WARN 2012-09-23 02:18:30 unity.favorites FavoriteStoreGSettings.cpp:139 Unable to load GDesktopAppInfo for 'ubuntuone-installer.desktop'
        ERROR 2012-09-23 02:18:30 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
        Initializing unityshell options...done
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-writer.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-calc.desktop' is using a deprecated format for its actions that will be dropped soon.
        WARN 2012-09-23 02:18:31 unity.libindicator <unknown>:0 Desktop file '/usr/share/applications/libreoffice-impress.desktop' is using a deprecated format for its actions that will be dropped soon.
        Setting Update "main_menu_key"
        Setting Update "run_key"

    Unfortunately I cannot make heads or tails of this. Can anyone please help?

    Read the article

  • Android Card Game Database for Deck Building

    - by Singularity222
    I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created that the user can browse. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes, and the search results are displayed in a ListActivity.

    My thought about deck creation is to add the primary key of each card the user selects to a SQLite database table, along with the amount they would like in the deck. That way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wants to make changes to the deck, they would load it and it would be parsed back into the table so they could make the changes. A similar situation would occur when they eventually load the deck to play a game.

    I'm just curious what the rest of you may think of this method. Currently, this is a personal project and I am the only one working on it. If I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble.
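    A rough sketch of the "deck under construction" table described above, using Python's sqlite3 purely for illustration (on Android the same schema and statements would go through SQLiteOpenHelper/SQLiteDatabase); the table and column names are invented:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE deck_builder (
                           card_id  INTEGER PRIMARY KEY,   -- key of the card in the read-only card table
                           quantity INTEGER NOT NULL CHECK (quantity > 0)
                       )""")

        def add_card(card_id, qty=1):
            # Bump the count if the card is already in the deck, otherwise insert it.
            cur = con.execute(
                "UPDATE deck_builder SET quantity = quantity + ? WHERE card_id = ?",
                (qty, card_id))
            if cur.rowcount == 0:
                con.execute("INSERT INTO deck_builder (card_id, quantity) VALUES (?, ?)",
                            (card_id, qty))

        add_card(42)
        add_card(42, 2)
        print(con.execute("SELECT card_id, quantity FROM deck_builder").fetchall())
        # [(42, 3)] - search results can join against this table to show the deck state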

    Read the article

  • VPN/OpenVPN as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing VPN as a service for the company I'm working for. This is basically a project gathering together various VPN resources under one aegis as a cloud-based service. As a user of OpenVPN I'm somewhat familiar with connecting, but I'm looking for resources to start this project. Essentially I need to be able to:

    - run a certificate authority and manage keys to distribute to coworkers
    - build an AMI that runs OpenVPN as a service
    - balance the load among instances as needed

    Any suggestions for tutorials, things to avoid, or roadblocks I might not be seeing from a novice perspective, or just help in visualizing this, would be appreciated.

    Read the article

  • Debian Linux server hangs after a week or so

    - by Alex Flo
    I have 2 Debian Linux 6.0.4 servers with a strange behaviour: after 5, 7, or 10 days they hang. By this I mean the servers need to be restarted, and before that ping gets no answer. I've been struggling with this problem for a couple of months now; here are some thoughts on what I've tried, without being able to solve the problem.

    - I changed the RAM on one server. These being 2 different servers, I doubt it could be something related to hardware, as a 3rd identical server doesn't have this problem.
    - I logged the server load, and when it crashes the load is fine (quite low).
    - I cannot find anything in the server logs; the logs are fine up until the server freezes.
    - I don't have access to the console, unfortunately.

    While I have years of admin experience I have never encountered such an issue, and right now I have no idea where else to investigate. If you have an idea of what I could try in order to fix the problem, please share it with me :-)

    Read the article

  • Using PDO with MVC

    - by mister martin
    I asked this question at Stack Overflow and received no response (closed as a duplicate with no answer). I'm experimenting with OOP and I have the following basic MVC layout:

        class Model {
            // do database stuff
        }

        class View {
            public function load($filename, $data = array()) {
                if(!empty($data)) {
                    extract($data);
                }
                require_once('views/header.php');
                require_once("views/$filename");
                require_once('views/footer.php');
            }
        }

        class Controller {
            public $model;
            public $view;

            function __construct() {
                $this->model = new Model();
                $this->view = new View();
                // determine what page we're on
                $page = isset($_GET['view']) ? $_GET['view'] : 'home';
                $this->display($page);
            }

            public function display($page) {
                switch($page) {
                    case 'home':
                        $this->view->load('home.php');
                        break;
                }
            }
        }

    These classes are brought together in my setup file:

        // start session
        session_start();

        require_once('Model.php');
        require_once('View.php');
        require_once('Controller.php');

        new Controller();

    Now where do I place my database connection code, and how do I pass the connection on to the model?

        try {
            $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE.'', DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch(PDOException $err) {
            die($err->getMessage());
        }

    I've read about dependency injection, factories, and miscellaneous other design patterns talking about keeping SQL out of the model, but it all goes over my head with abstract examples. Can someone please just show me a straightforward, practical example?
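    For what it's worth, the usual constructor-injection shape is sketched below in Python (kept in one language with the other examples on this page); the class, table, and parameter names are invented, and the same idea maps onto the PHP above by creating the PDO object in the setup file and passing it into Model's constructor:

        import sqlite3  # stand-in for PDO; any DB-API connection works the same way

        class Model:
            def __init__(self, db):
                self.db = db  # the model receives a ready-made connection...

            def latest_posts(self, limit=5):
                # ...and only runs queries; it never builds the connection itself.
                return self.db.execute(
                    "SELECT title FROM posts ORDER BY id DESC LIMIT ?", (limit,)).fetchall()

        class Controller:
            def __init__(self, model):
                self.model = model

        # The "setup file" is the one place that knows how to connect.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
        db.execute("INSERT INTO posts (title) VALUES ('hello')")
        controller = Controller(Model(db))
        print(controller.model.latest_posts())  # [('hello',)]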

    Read the article

  • Coldfusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 / ColdFusion 7 server a few weeks ago. Today a network cable was accidentally unplugged from the server. On plugging it back in, pages were not loading at all; rather, we were receiving a generic ColdFusion error page. After restarting IIS several times, and ColdFusion even more than that, we finally got pages to start loading. However, the loading is extremely slow (30+ seconds) on pages that used to load instantly. Loading through the local network (i.e. localhost/cfide/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary; I'm just not sure what information you might need in order to help. Thanks for your time.

    Read the article

  • Website Ethics / legal issues, image copyrights

    - by RailsN00b
    Ignoring the technical implementation of a website for a second, assume a website that is similar to Twitter but with pictures. A user says something and attaches a picture of whatever they said. Given the nature of the internet, the image will most likely not be their own. There are 2 options that I see for dealing with this:

    1. The user posts a URL for the picture, and the website pulls the picture from that URL every time someone visits the page.
    2. The website saves the image in its own database of images and displays the image to visitors 'locally'.

    The problem with option #1: while it saves storage, I see an issue with 'stealing' other websites' bandwidth, and if my website has many, many visitors it could cost the image-hosting websites a lot and possibly even crash them if their servers can't handle the load. The problem with option #2: while it spares the load on other websites, it practically takes pictures that could have copyright on them.

    Which option is better to implement in terms of legal issues and ethics? When do I need to contact another website to request permission to use the images from that site? Does anyone really care about that any more? Where can I read about this?

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

    - master.ourdomain.com - the file-syncing "hub" of the webservers
    - www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus the webservers are kept in sync even though they aren't syncing against each other directly.

    I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under those circumstances, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started and lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. So before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines.

    We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile = "/home/user/log/lsyncd/log.txt",
            statusFile = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon = false,
        }
        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }
        sync{
            default.rsyncssh,
            source="/home/user/www/",
            host="[email protected]",
            targetdir="/home/user/www/",
            rsyncOpts="-ltus",
            excludeFrom="/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem?

    Thanks for lending your time and expertise.

    Chris
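    One possible way to get the "pull from master first, then start pushing" ordering described above is to wrap lsyncd in a small launcher that performs the one-way rsync itself before starting the daemon. The sketch below is an untested illustration in Python; the host, paths, lsyncd config location, and the use of --delete (to make the local tree exactly mirror master) are assumptions, not part of the poster's setup:

        #!/usr/bin/env python
        # Hypothetical boot-time wrapper: pull current code from master first,
        # and only start lsyncd (which then pushes local changes) if the pull worked.
        import subprocess
        import sys

        MASTER = "user@master.ourdomain.com"                 # placeholder host
        WEBROOT = "/home/user/www/"                          # trailing slash: sync directory contents
        LSYNCD_CONF = "/home/user/conf/lsyncd/lsyncd.conf"   # placeholder path

        def main():
            pull = subprocess.call([
                "rsync", "-ltus", "--delete",                # one-way: make local match master
                "%s:%s" % (MASTER, WEBROOT), WEBROOT,
            ])
            if pull != 0:
                sys.exit("initial rsync from master failed; not starting lsyncd")
            subprocess.call(["lsyncd", LSYNCD_CONF])         # hand off once the tree is current

        if __name__ == "__main__":
            main()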

    Read the article

  • Oracle University: Fusion Middleware Certification News

    - by rituchhibber
    The following exam has recently gone into production:

    Title and exam code: Oracle Fusion Middleware 11g: Build Applications with Oracle Forms
    Certification track: Oracle Certified Professional, Oracle Fusion Middleware 11g Forms Developer

    Full preparation details are available on the exam page, including prerequisites for this certification, exam topics and pricing. Remember: your OPN discount is applied to the standard pricing shown on the website. Exams can be taken at an Oracle Test Center near you or at any Pearson VUE Testing Center.

    Read the article

  • Hyper-V guest machine loads slowly

    - by Dani Avni
    This is by far one of the strangest things I have seen. I have a Win 2008 R2 cluster with a CSV. The CSV itself is on iSCSI storage (Hitachi HUS 110). The basic config of the two hosts in the cluster is:

    - Dell R610
    - Win 2008 R2 with all patches
    - 64GB RAM
    - 1 NIC for host access
    - 2 NICs for guest access
    - 2 NICs for iSCSI

    These machines work great and I can load a 2008 R2 test guest machine on them in less than 90 seconds. After the above config has been running for over a year, I now need to add a new host. The new host is:

    - Dell R620 (still Intel, but a different CPU)
    - Win 2008 R2 with all patches
    - 64GB RAM
    - 1 NIC for host access
    - 2 NICs for guest access
    - 2 NICs for iSCSI

    I added this new host to the domain and to the cluster, gave it access to the CSV, and tried loading the same guest machine that loads in 90 seconds on the other hosts. The machine loads in about 6 minutes. No matter how many times I try this, the old hosts load the machine in about 90 seconds and the new host in around 6 minutes.

    To eliminate any problems with the iSCSI connection, I added a new LUN and accessed it directly from the new host, and I was working at around 300MB/s, so no problem there. I also tested the connection between the other hosts and the new one, and the network is working fine there too. To eliminate problems in Hyper-V, I copied the machine to the local disk of the new host and it loaded in less than 20 seconds.

    Now is the point where things get a lot stranger: in my tests I tried installing a fresh Windows guest machine to the CSV from the new host. I noticed that while the fresh Windows was installing, my test guest was loading in less than 90 seconds even on the new host (I repeated this a few times). If I paused the fresh-install guest and tried loading the test guest again, it loaded in 6 minutes; and again, after I resumed the guest installation, the test guest loaded fast. After the fresh Windows was also loaded, I ran tests loading the fresh Windows and my test machine. Each of them loaded in about 5 minutes when I tried loading them separately; however, when I started both of them at the same time they both loaded in around 2.5 minutes.

    It seems that the iSCSI disk access only works properly if it is under some load (although I never got above 10% utilization according to Task Manager). Does anyone have any idea what could be the problem?

    Read the article

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Where I'm able to load a lot of my page from cache it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening in this time?

    Edit: I found this answer, which essentially sums up my question, so a different question: does anyone know a good method to reduce the length of these gaps? Presumably (albeit rather extreme), if I loaded all my CSS inline on the page there wouldn't be a delay after loading the CSS file before the images were loaded.

    Read the article

  • Ubuntu UK Podcast: Their Purple Moment

    Ubuntu UK Podcast: "We interview the awesome Stuart Langridge and discuss the Ubuntu One Music Store, beta testing, record tokens, Rhythmbox, MP3s, Britney Spears, file syncing, customer service, getting music into the store and Severed Fifth, Frequently Asked Questions, vinyl, reaching 'real' people and Shot of Jaq."

    Read the article

  • Windows 8 BIOS - Boot Ubuntu from External HDD

    - by F3AR3DLEGEND
    My laptop came pre-loaded with Windows 8 64-bit (the only storage device is a 128 GB SSD). Since it is my school laptop, and I've heard creating a Linux partition alongside Windows 8 is not very wise, I installed Ubuntu onto my external hard drive. I have a 500GB external HDD with the following partitions:

    - Main partition - NTFS - ~400 GB
    - Extended partition:
        / - ext2 - ~25 GB
        /home - ext2 - ~30 GB
        swap - ext2 - 10 GB
        /boot - ? - 10 GB (? = not sure of the partition type)

    Using the PenDriveLinux installer, I created a live USB version of Ubuntu 12.04 (LTS) on a 4GB USB drive. Using that, I installed Ubuntu onto the external hard drive, without any errors (or at least none that I was notified of). Using the BIOS settings, I changed the OS-loading order so that it is:

    1. My external USB HDD
    2. Windows Boot Loader
    3. Some other things

    Therefore Ubuntu should load from my hard drive first, but it doesn't. Also, my hard drive is in working condition, and it turns on when the BIOS starts (there is a light indicator). When I start my laptop, it goes directly to Windows 8 (I have the fast-startup setting disabled as well). So, is there any way for me to set it up so that when my HDD is connected, it will automatically load Ubuntu? Thanks in advance!

    Read the article

  • Incorporating libs into module pattern

    - by webnesto
    I have recently started using require.js (along with Backbone.js, jQuery, and a handful of other JavaScript libs) and I love the module pattern (here's a nice synopsis if you're unfamiliar: http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth). Something I'm running up against is best practice for incorporating libs that don't (out of the box) support the module pattern. For example, jQuery without modification is going to load into a global jQuery variable, and that's that. Require.js recognizes this and provides an example project for download with a (slightly) modified version of jQuery to incorporate with a require.js project. This goes against everything I've ever learned about using external libs - never modify the source. I can list a ton of reasons. Regardless, this is not an approach I'm comfortable with. I have been using a mixed approach, wherein I build/load the "traditional" JS libraries in a "traditional" way (available in the global namespace) and then use the module pattern for all of my application code. This seems okay to me, but it bugs me because one of the real beauties of the module pattern (no globals) is getting perverted. Has anyone else got a better solution to this problem?

    Read the article

  • Organization of DLL-linked functions

    - by m25
    So this is a code organization question. I got my basic code working, but when I expand it, it will be terrible. I have a DLL that I don't have a .lib for, therefore I have to use the whole LoadLibrary()/GetProcAddress() combo. It works great, but this DLL that I'm referencing has 100+ functions.

    My current process is: (1) typedef a type for the function, e.g. typedef short(_stdcall *type1)(void); then (2) declare a function pointer of that type with the name I want to use, such as type1 function_1; then (3) do the whole LoadLibrary, then do something like function_1 = (type1)GetProcAddress(hinstLib, "_mangled_funcName@5");

    Normally I would like to put all of my function definitions in a header file, but because I have to use the LoadLibrary function it's not that easy; the code will be a mess. Right now I'm doing (1) and (2) in a header file and was considering making a function in another .cpp file to do the LoadLibrary and dump all of the (3)'s in there. I considered using a namespace for the functions so I can use them in the main function and not have to pass them over from the other function.

    Any other tips on how to organize this code so that it is readable and organized? My goal is to be able to use function_1 as a regular function in the main code. If I have to do a ref::function_1 that would be okay, but I would prefer to avoid it. This code, for all practical purposes, is just plain C at the moment. Thanks in advance for any advice!
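    As an illustration of the table-driven organization (one table of exports, one binding loop) rather than a C++ answer, here is the same idea sketched with Python's ctypes, which wraps LoadLibrary/GetProcAddress on Windows; libc and the two functions below are stand-ins for the real DLL's exports, and the snippet assumes Linux or macOS:

        import ctypes
        import ctypes.util

        lib = ctypes.CDLL(ctypes.util.find_library("c"))     # analogous to LoadLibrary

        # One row per export: exported name -> (return type, argument types).
        EXPORTS = {
            "abs":  (ctypes.c_int,  [ctypes.c_int]),
            "labs": (ctypes.c_long, [ctypes.c_long]),
        }

        funcs = {}
        for name, (restype, argtypes) in EXPORTS.items():    # the single binding loop
            fn = getattr(lib, name)                          # analogous to GetProcAddress
            fn.restype, fn.argtypes = restype, argtypes
            funcs[name] = fn

        print(funcs["abs"](-5))  # 5

    In C++ the equivalent would be an array of {exported name, pointer to the function pointer} entries filled in one loop after LoadLibrary, so each new export adds one table row instead of three scattered edits.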

    Read the article

  • Intel GMA 500 support for 11.10

    - by lucazade
    I would like to know if the new open-source video driver for the Intel GMA 500 included in kernel 3.0.x will be included by default in the kernel that will be shipped in Oneiric Ocelot. Driver support for this GFX chipset has always been poor and mainly community-driven; now we finally have a KMS open-source driver, written by kernel hackers and actually included in the staging kernel repo. If any kind of testing is needed, there is a mega-thread on the Ubuntu Forums with hundreds of users ready to test everything.

    Read the article

  • PHP compiled on Mac OSX 10.6 - using /usr/lib when trying to start apache... rather than /opt/local/lib specified when php was configured

    - by Anthony
    PHP 5.3.3 compiled on Mac OS X 10.6 is using /usr/lib when trying to start Apache, rather than the /opt/local/lib specified when PHP was configured. Why is it trying to load from /usr/lib when I specified in my configure not to?

        httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/libphp5.so into server: dlopen(/usr/libexec/apache2/libphp5.so, 10): Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /usr/libexec/apache2/libphp5.so
          Reason: Incompatible library version: libphp5.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0

    The error message above refers to /opt/local/lib; when I run otool -LD /opt/local/lib/libiconv.2.dylib I get:

        /opt/local/lib/libiconv.2.dylib:
        /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0)

    That shows a version different from the one httpd reports in the error.

    Read the article

  • DNS settings for SaaS in the cloud?

    - by Jeremy
    I am building a SaaS product. When a user signs up for an account they must select an alias for their site: --------.getlaunchpoint.com. Right now I have a wildcard A record, *.getlaunchpoint.com, that points to the server's IP address. However, with Azure I am not given an IP address; the suggested implementation is to use a CNAME. So I need to create a CNAME pointing *.getlaunchpoint.com to getlaunchpoint.cloudapp.net, but GoDaddy does not support CNAME wildcards. Searching on Google I'm getting conflicting information: is a wildcard CNAME bad practice? I run into the same problem with Amazon EC2 if I want to make use of load balancers, because you cannot tie a public IP address to an Amazon load balancer; Amazon also suggests the use of a CNAME. Any help would be appreciated.

    Read the article

  • Getting the PC speaker to beep

    - by broiyan
    There has been much written on getting the beep sound from Ubuntu releases over the years, for example: fixing the beep. My needs are slightly different in that I do not want to ensure sound-card beeps are functioning. Instead, I want PC speaker beeps, the kind produced by the original built-in speaker, because I believe they will produce less CPU load. I have confirmed that my computer has the PC speaker by unplugging the external speakers and shutting down Ubuntu: at some point in the shutdown and restart process a beep is heard even though the external speakers have no power.

    I have tried the following:

    - In /etc/modprobe.d/blacklist.conf, turn these lines into comments:
          #blacklist snd_pcsp
          #blacklist pcspkr
    - In .bashrc:
          /usr/bin/xset b on
          /usr/bin/xset b 100
    - Enable the bell in the GNOME terminal: Edit > Profile Preferences > General > Terminal Bell
    - Ensure there are no "mute" selections in System > Preferences > Sound (uncheck them all in the various tabs).
    - Select "Enable window and button sounds" in System > Preferences > Sound > Sound Effects.
    - In gconf-editor, desktop > gnome > sound, select the three sound check boxes.
    - In gconf-editor, apps > metacity > general, select the audible bell check box.

    Still I get no PC speaker beeps when I send code 7 to the console via my Java program or use echo -e '\a' on the bash command line. What else should I try?

    Update: since my goal is to minimize load on the CPU, here is a comparison of elapsed times. Each test is for 100,000 iterations, and each variant was performed three times, so three results are presented for each.

        printwriter.format("%c", 7);                                          // 1.3 seconds, 1.5 seconds, 1.5 seconds
        Toolkit.getDefaultToolkit().beep();                                   // 0.8 seconds, 0.3 seconds, 0.5 seconds
        try { Runtime.getRuntime().exec("beep"); } catch (IOException e) { }  // 10.3 seconds, 16.3 seconds, 11.4 seconds

    These runs were done inside Eclipse, so multiply by some value less than 1 for standalone execution. Unfortunately, Toolkit's beep is silent on my computer, and so is code 7. The beep utility works but has the highest cost.

    Read the article

  • Should EICAR be updated to test the revision of the antivirus system?

    - by makerofthings7
    I'm posting this here since programmers write viruses, and AV software. They also have the best knowledge of heuristics and how AV systems work (cloaking, etc.).

    The EICAR test file was used to functionally test an antivirus system. As it stands today, almost every AV system will flag EICAR as being a "test" virus. For more information on this historic test virus, please click here. Currently the EICAR test file is only good for testing the presence of an AV solution, but it doesn't check that the engine file or DAT file is up to date. In other words, why do a functional test of a system that could have definition files more than 10 years old? With the increase of zero-day threats, it doesn't make much sense to functionally test your system using EICAR.

    That being said, I think EICAR needs to be updated/modified to be an effective test that works in conjunction with an AV management solution. This question is about real-world testing, without using live viruses, which was the intent of the original EICAR. With that in mind, I'm proposing a new EICAR file format with the appendage of an XML blob that will conditionally cause the antivirus engine to respond:

        X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H*
        <?xml version="1.0"?>
        <engine-valid-from>2010-1-1Z</engine-valid-from>
        <signature-valid-from>2010-1-1Z</signature-valid-from>
        <authkey>MyTestKeyHere</authkey>

    In this sample, the antivirus engine would only alert on the EICAR file if both the signature and engine file are equal to or newer than the valid-from dates. Also, there is a passcode that restricts the usage of EICAR to the system administrator. If you have a background in test-driven development (TDD) for software, you may see that all I'm doing is applying the principles of TDD to my infrastructure. Based on your experience and contacts, how can I make this idea happen?
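    A rough sketch of how an engine might evaluate the proposed blob, purely as an illustration of the idea above (this is not an existing format or API); the engine/signature dates, the expected key, and the lenient date parsing are assumptions:

        # Sketch: decide whether to alert on the extended EICAR file proposed above.
        from datetime import date
        import re

        ENGINE_DATE = date(2012, 6, 1)       # hypothetical engine build date
        SIGNATURE_DATE = date(2012, 6, 15)   # hypothetical DAT/signature date
        EXPECTED_KEY = "MyTestKeyHere"       # passcode set by the administrator

        def parse_date(text):
            """Parse the loose 'YYYY-M-DZ' form used in the example blob."""
            y, m, d = re.match(r"(\d{4})-(\d{1,2})-(\d{1,2})Z?", text.strip()).groups()
            return date(int(y), int(m), int(d))

        def should_alert(xml_blob):
            def tag(name):
                m = re.search(r"<%s>(.*?)</%s>" % (name, name), xml_blob, re.S)
                return m.group(1) if m else None

            if tag("authkey") != EXPECTED_KEY:
                return False  # passcode mismatch: ignore the test file entirely
            engine_ok = ENGINE_DATE >= parse_date(tag("engine-valid-from"))
            signature_ok = SIGNATURE_DATE >= parse_date(tag("signature-valid-from"))
            return engine_ok and signature_ok  # alert only if both are current enough

        blob = ("<engine-valid-from>2010-1-1Z</engine-valid-from>"
                "<signature-valid-from>2010-1-1Z</signature-valid-from>"
                "<authkey>MyTestKeyHere</authkey>")
        print(should_alert(blob))  # True with the hypothetical dates above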

    Read the article

  • How do I maintain a really poorly written code base?

    - by onlineapplab.com
    Recently I got hired to work on an existing web application. Because of an NDA I'm not at liberty to disclose any details, but this application is working online in a sort of beta-testing stage before its official launch. We have a few hundred users right now, but this number is supposed to increase significantly after the official launch. The application is written in PHP (though that is irrelevant to my question) and is running on a dual-Xeon standalone server with severe performance problems. I have seen a lot of bad PHP code, but this really sets new standards, especially knowing how much time and money was invested in developing it:

    - It is as badly coded as possible: PHP, HTML and SQL are mixed together, and code is repeated wherever necessary (especially SQL queries).
    - No functions are used, not to mention any OOP.
    - There are four versions of the app (desktop, iPhone, Android + other mobile). Each version has pretty much the same functionality but was created by copying the whole code base, so now there are differences between each version and it is really hard to maintain.
    - The database is really badly designed, which is causing severe performance problems.
    - To fix some errors in the PHP code, a lot of database triggers are used which update data on SELECT and on INSERT, so any testing is a nightmare.

    Basically, every sin of bad programming you can imagine is there. For example, it is not only possible to use SQL injection in literally every place, but you can log into the app with a login that doesn't exist and an empty password.

    The team which created this app is not working on it any more, and there is an outsourced team which suggested that there are some problems but was never willing to deal with the elephant in the room, partially because they've got a very comfortable contract and partially due to lack of skills (just my opinion). My job was supposed to be fixing some performance problems and extending the existing functionality, but the first thing I was asked to do was a review of the existing code base. I made my review and it was quite a shock for the management, but my conclusions were eventually confirmed by other programmers. Management made it clear that it is not possible to start rewriting this app from scratch (which in my opinion should be done). We have to maintain its operable state and at the same time fix performance errors and extend the functionality.

    My question is: as I don't want to just patch the existing code, how do I transform this into a properly written app while keeping the existing code working at the same time? My plan is:

    1. Unify the four existing versions into a common code base (fixing only the most obvious errors).
    2. Redesign the db and use triggers to populate it with data (so data will be maintained in two formats at the same time).
    3. All new functionality will be written as a separate project.
    4. Step by step, transfer existing functionality into the new project.
    5. After some time everything will be in the new project.

    Some explanation about #2: right now it is practically impossible to make any updates to the existing db; any change requires reviewing the whole code and making changes in many places. Is such a plan feasible at all? Another solution is to walk away and leave the headache to someone else.

    Read the article

  • NFS server on Cygwin slow

    - by Weltenwanderer
    The setup: we run an instance of Cygwin nfsd on a Windows 2008 server (Xeon 3.2 GHz). There are several Sun Solaris and SunOS machines accessing the shares. This is the exports file:

        /disk3 (rw,all_squash)
        /disk2 (rw,all_squash)

    Those paths are soft-linked to the relevant cygdrive/d/path/to/dir paths. Some of the folders contain up to 10k files.

    The problem: ls -la on a mounted folder on the Sun boxes takes 2-3 minutes, and general read performance is really bad. cat filename displays the file in slow bursts, and this hurts performance for tasks that access those shared files heavily. Processor load is not the issue; the NFS server idles most of the time, and the Cygwin tasks never get over 1% load.

    Read the article
