Search Results

Search found 8132 results on 326 pages for 'generated'.


  • 2D/Isometric map algorithm

    - by Icarus Cocksson
    First of all, I don't have much experience in game development, but I do have development experience. I know how to make a map, but I don't know whether my solution is normal or a hack, and I don't want to waste my time coding things only to realise they're utterly crap and lose my motivation. Let's imagine the following map (2D, top view, a square): X: 0 to 500, Y: 0 to 500. My character currently stands at X:250, Y:400 (near the centre, 100px above the bottom), and I control him with the keyboard: the LEFT key does X--, the UP key does Y--, etc. That part is kid's play. I'm asking because I know there are engines that automate this task. For example, games like Diablo 3 use an engine: you can pretty much drag and drop a rock onto the map, and it is placed there automatically, with the collision detected so the player is unable to pass through it. But what exactly does the engine do in the background? Does it generate a map like mine, place a rock at the centre, and check it like this?

        unmovableObjects = array('50,50'); // we placed a rock at location 50,50
        if (Map.hasUnmovableObject(CurrentPlayerX, CurrentPlayerY)) {
            // unable to move
        } else {
            // able to move
        }

    My question is: is this how 2D/isometric maps are generated, or is there different, more complex logic behind them?
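    A common engine-side representation for this (offered as a hedged sketch, not how any particular engine such as Diablo 3's actually works) is a walkability grid rather than a list of coordinate strings: the map is divided into tiles and each tile stores whether it is blocked. The names TILE_SIZE, placeRock and canMoveTo below are invented for illustration:

        // Minimal tile-based collision sketch; all names are hypothetical.
        public class CollisionMap {
            private static final int TILE_SIZE = 50;   // world units per tile
            private final boolean[][] blocked;         // true = unmovable object here

            public CollisionMap(int widthTiles, int heightTiles) {
                blocked = new boolean[widthTiles][heightTiles];
            }

            public void placeRock(int worldX, int worldY) {
                blocked[worldX / TILE_SIZE][worldY / TILE_SIZE] = true;
            }

            // True if the player may stand at the given world position.
            public boolean canMoveTo(int worldX, int worldY) {
                return !blocked[worldX / TILE_SIZE][worldY / TILE_SIZE];
            }
        }

    The movement code then calls canMoveTo(x, y) before applying X-- or Y--, which is essentially the check in the question, just O(1) per move instead of scanning a list of objects.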

    Read the article

  • Social IT guy barrier [closed]

    - by sergiol
    Possible Duplicate: How do you deal with people who ask you to fix their computer? Hello. Almost every person who deserves the title of programmer has faced the problem of people who don't even remember the mere existence of these professionals unless they have serious problems with their computer or some other IT-related issue. Maybe my post will be considered off-topic, but I think it is a very important question. As Joel Spolsky says, IT guys are not Asperger geeks, and they need a social life like everybody else. But the people who are always asking us for favours can deeply ruin our social and personal lives. I have experienced this myself. This has generated articles like http://www.lifereboot.com/2007/10-reasons-it-doesnt-pay-to-be-the-computer-guy/ and http://ecraazul.wordpress.com/2009/01/29/o-gajo-da-informatica-de-a-a-z/ (I received the latter in my mailbox; it is in Portuguese, but I believe it was translated from English). Basically the idea is to criticize people who are always asking us for favours. It is even more annoying when you are highly specialized in some subject and a person asks you a completely out-of-context question. For example, you are a VBA programmer and somebody tells you that his or her mobile internet dongle stopped working five days ago and needs your help to get it working again. When you go to a doctor to fix your legs, you don't go to an ophthalmologist; you go to an orthopedist. And you pay. I don't know how it works in other countries, but in Portugal being a doctor is such an overvalued job that they earn a lot of money and almost nobody asks them for free favours. So, my question is: what kind of social barrier (or whatever else) do you use to protect yourself from this situation?

    Read the article

  • Better way to generate enemies of different sub-classes

    - by KDiTraglia
    So let's pretend I have an Enemy class with some generic implementation, and all the specific enemies of my game inherit from it. There are points in my code where I need to check whether an enemy is a specific type, but in Java I have found no easier way than this monstrosity:

        // Must be a better way to do this
        if (Ninja.class.isAssignableFrom(enemy.getClass())) { ... }

    My partner on the project saw these and changed them to use an enum instead:

        public class Ninja extends Enemy {
            // EnemyType is an enum containing all our enemy types
            public EnemyType type = EnemyType.NINJA;
        }

        if (enemy.type == EnemyType.NINJA) { ... }

    I also have found no way to generate enemies with varying probabilities besides this:

        for (EnemyType type : enemyTypes) {
            if ((randomNext = (randomNext - type.getFrequency())) < 0) {
                enemy = createEnemy(type);
                break;
            }
        }

        private static Enemy createEnemy(EnemyType type) {
            switch (type) {
                case NINJA:
                    return new Ninja(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case GORILLA:
                    return new Gorilla(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                case TREX:
                    return new TRex(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed());
                // etc.
            }
            return null;
        }

    I know Java is a little weak at dynamic object creation, but is there a better way to implement this, something like the following?

        for (EnemyType type : enemyTypes) {
            if ((randomNext = (randomNext - type.getFrequency())) < 0) {
                // Change enemyTypes to hold the classes of the enemies I can spawn
                enemy = type.getEnemyClass().newInstance();
                break;
            }
        }

    Is the above possible? How would I declare enemyTypes to hold the classes if so? Everything I have tried so far has generated compile errors and general frustration, but I figured I might ask here before I completely give up and write the huge mass that is the createEveryEnemy() method. All the enemies do inherit from the Enemy class (which is what the enemy variable is declared as). Also, is there a shorter way to check the type of a particular enemy than the isAssignableFrom check above? I'd like to ditch the enums entirely if possible, since they seem repetitive when the class name itself holds that information.
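    One idiomatic way to avoid both the reflection and the switch, offered here as a sketch under assumed names (SpawnTable, register and spawn are invented; Enemy is the question's base class), is to register a constructor reference per type using java.util.function.Supplier:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;
        import java.util.function.Supplier;

        // Hypothetical weighted spawner: each entry pairs a spawn frequency
        // with a factory (typically a constructor reference or lambda).
        class SpawnTable {
            // record requires Java 16+; a small final class works on older versions
            private record Entry(int frequency, Supplier<Enemy> factory) {}
            private final List<Entry> entries = new ArrayList<>();
            private final Random rand = new Random();
            private int total = 0;

            void register(int frequency, Supplier<Enemy> factory) {
                entries.add(new Entry(frequency, factory));
                total += frequency;
            }

            Enemy spawn() {
                int r = rand.nextInt(total);           // point in the cumulative range
                for (Entry e : entries) {
                    if ((r -= e.frequency()) < 0) {
                        return e.factory().get();      // no reflection, no switch
                    }
                }
                throw new IllegalStateException("empty spawn table");
            }
        }

    Registration then reads table.register(3, () -> new Ninja(new Vector2D(rand.nextInt(getScreenWidth()), 0), determineSpeed())); and for the type check itself, enemy instanceof Ninja is the short form of the isAssignableFrom call.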

    Read the article

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
    I'm developing a game where the map is generated with Perlin noise, but on the CPU. I generate the Perlin noise onto a small texture and then stretch it out to the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result. Please ignore the bullets and the explosion in the picture. What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels); they are rendered to the same texture through Perlin noise. However, this doesn't look very pretty. So I was wondering: would it be possible to double the number of pixels using a shader, rounding edges at the same time? In other words, improve the DPI. I'm using SharpDX with DirectX 11, through its toolkit feature, but any help that leads me in the right direction (for instance through HLSL) would be a great help. Thanks in advance.
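    The "double the pixels and round the edges" effect described here is what pixel-art scalers such as EPX/Scale2x do. The rule ports naturally to an HLSL pixel shader; as a hedged, language-neutral illustration (not SharpDX code), here it is in plain Java operating on an ARGB pixel array:

        // EPX/Scale2x: doubles an image while rounding staircase edges.
        public final class Scale2x {
            public static int[] scale(int[] src, int w, int h) {
                int[] dst = new int[(2 * w) * (2 * h)];
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int p = src[y * w + x];
                        // Clamped neighbours: a=above, d=below, c=left, b=right.
                        int a = src[Math.max(y - 1, 0) * w + x];
                        int d = src[Math.min(y + 1, h - 1) * w + x];
                        int c = src[y * w + Math.max(x - 1, 0)];
                        int b = src[y * w + Math.min(x + 1, w - 1)];
                        int e0 = (c == a && c != d && a != b) ? a : p; // top-left
                        int e1 = (a == b && a != c && b != d) ? b : p; // top-right
                        int e2 = (d == c && d != b && c != a) ? c : p; // bottom-left
                        int e3 = (b == d && b != a && d != c) ? d : p; // bottom-right
                        int row = (2 * y) * (2 * w) + 2 * x;
                        dst[row] = e0;
                        dst[row + 1] = e1;
                        dst[row + 2 * w] = e2;
                        dst[row + 2 * w + 1] = e3;
                    }
                }
                return dst;
            }
        }

    In a shader, the same four comparisons run per output pixel against the low-resolution texture, so the upscale costs a single texture pass.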

    Read the article

  • Why is Maven so slow compared to automake?

    - by ???'Lenik
    I have a Maven project consisting of around 100 modules. I have reasons to decompose the project into that many modules, and I don't think I should merge them just to speed up the build process. I have looked at a lot of projects by other people, e.g. the Maven project itself, Apache Archiva, and the Hudson project; they all consist of a lot of modules, nearly 100, more or less. The problem is that building them all takes so much time: 3 hours for the first build (which is acceptable, because there are a lot of artifacts to download) and 15 minutes for the second build (which is not acceptable). With automake, things are similar: the first time, you need to configure the project to prepare the magical config.h file, which is far more complex than what Maven does. But it's still fast, maybe 10 seconds on my Debian box. After that, make install takes maybe 10 minutes for the first build. However, once everything is prepared and the .o object files are generated, they don't have to be rebuilt at all for the second build. (In Maven, everything is rebuilt every time.) I really wonder how people working on Maven projects can bear such a long time for each build. I just can't sit calmly through each Maven build; it takes too long, really.

    Read the article

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems:

    1. When I import a single animation frame from my animation file to Blender, all bones look fine. But when I import multiple frames (all of them), only the first one looks right; it seems like newer frames are affected by older ones, so you get slightly off positions/rotations. This is true both when assigning PoseBone.matrix and when assigning PoseBone.matrix_basis.

        bone_index = 0
        # for each frame:
        for frame_index in range(frame_count):
            # for each pose bone: add a key
            for bone_name in bone_names:  # "bone_names" - a list of bone names I got earlier
                # "animation_matrices" - a nested list of matrices generated from reading a file
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                # create the 'keys' for the Action from the poses
                pose.bones[bone_name].keyframe_insert('location', frame=frame_index+1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame=frame_index+1)
                pose.bones[bone_name].keyframe_insert('scale', frame=frame_index+1)
                bone_index += 1
            bone_index = 0

    Again, it seems like previous frames are affecting later ones, because if I import a single frame from the middle of the animation, it looks fine.

    2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly whether the bones had a hierarchy or none at all. In Blender 2.6, there are PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to the parent bone, matrix isn't; the API says it's in object space. So it should have worked, but it doesn't. So I guess we need to calculate a local-space matrix from our armature-space animation matrices from the files. I tried multiplying PoseBone.matrix by PoseBone.parent.matrix.inverted() in both possible orders with no luck; still weird deformations.

    Read the article

  • Tweaking log4net Settings Programmatically

    - by PSteele
    A few months ago, I had to dynamically add a log4net appender at runtime. Now I find myself in another log4net situation: I need to modify the configuration of my appenders at runtime. My client requires all files generated by our applications to be saved to a specific location. This location is determined at runtime. Therefore, I want my FileAppenders to log their data to this specific location – but I won't know the location until runtime, so I can't add it to the XML configuration file I'm using. No problem. Bing is my new friend and returned a couple of hits. I made a few tweaks to their LINQ queries and created a generic extension method for ILoggerRepository (just a hunch that I might want this functionality somewhere else in the future – sorry, YAGNI fans):

        public static void ModifyAppenders<T>(this ILoggerRepository repository, Action<T> modify)
            where T : log4net.Appender.AppenderSkeleton
        {
            var appenders = from appender in log4net.LogManager.GetRepository().GetAppenders()
                            where appender is T
                            select appender as T;

            foreach (var appender in appenders)
            {
                modify(appender);
                appender.ActivateOptions();
            }
        }

    Now I can easily add the proper directory prefix to all of my FileAppenders at runtime:

        log4net.LogManager.GetRepository().ModifyAppenders<FileAppender>(a =>
        {
            a.File = Path.Combine(settings.ConfigDirectory, Path.GetFileName(a.File));
        });

    Thanks beefycode and Wil Peck.

    Read the article

  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. And they have a valid point: it does take a very long time to load the test results, even for a small number of tests. The reason is that the test results include not just the result of the test run but also all the binaries that were part of the test run. This often means that the debug symbols (*.pdb) will be downloaded to your local machine as well. The reason for this behaviour is that it lets you re-run the tests locally. However, most of the time this is not what the developer will do; they just want to know which tests failed and why. They can then fix the tests and re-run them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is to find the location of the .trx file that is generated during the build, particularly in TFS 2010, where you often have multiple build agents, which of course results in different paths to the .trx file. Note: to use this you must have read permission on the build folder on the build agent where the build was executed.

    1. Open the build result for the build.
    2. Click View Log.
    3. Locate the part where MSTest is invoked (the original post shows a screenshot of the invocation when using test containers). Note: you can actually search in the log window; press Ctrl+F and you will get a little search box at the bottom. Nice!
    4. On the MSTest command line call, locate the /resultsfileroot parameter, which points to the folder where the test results are stored. Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults
    5. Double-click the .trx file and you will notice that it loads much faster compared to opening it from the build log window.

    Read the article

  • Internet Explorer will not open Office files

    - by geekrutherford
    An issue was brought to my attention today at work where certain users were unable to open Office files (specifically Excel) from Internet Explorer 7. The user would click a button which simply generated an inline JS call to open a pop-up pointing to the .xlsx file on the server. IE would open the pop-up, and shortly thereafter the pop-up would disappear without the file ever opening. I tweaked the security settings in the user's browser: added the site to the list of trusted sites and lowered the security settings to Medium-Low. This allowed IE to at least prompt with the Save or Open message, but clicking Open resulted in "Internet Explorer Could Not Open the Site...". Perturbed, I retreated back to Geek Central (aka my desk) and modified my application so that instead of simply pointing the browser at the file, it used Response.TransmitFile() to stream it to the browser instead. I thought to myself, "this is perfect, it has to work!!!" Alas, no luck. Bewildered and confused, I returned to the lone user's computer and started looking around the various IE options. I stumbled upon "Clear SSL State" under the "Content" tab, which appears to clear out all SSL certificates on the client, forcing it to refresh. Doing this in concert with resetting the security levels for all zones back to their defaults seemed to do the trick.

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster), to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view) (Pardon the gap in the middle - I drew one side and mirrored it) The triangle sizes are chosen so that all are approximately the same size when projected. Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc, but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
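    To make the ring spacing concrete (the post gives no numbers, so the growth-factor approach and all names below are assumptions, not the author's code): if every ring of the fan should project to roughly the same screen size, ring width must grow in proportion to distance, which gives geometrically spaced rings. A Java sketch:

        // Hypothetical helper: computes ring distances for a camera-anchored
        // "fan" mesh so each ring projects to roughly constant screen size.
        // Constant projected size means ring width proportional to distance,
        // i.e. r[i+1] = r[i] * (1 + growth).
        public final class FanRings {
            public static double[] ringDistances(double near, double far, double growth) {
                int count = (int) Math.ceil(Math.log(far / near) / Math.log(1.0 + growth));
                double[] rings = new double[count + 1];
                double r = near;
                for (int i = 0; i <= count; i++) {
                    rings[i] = Math.min(r, far);
                    r *= (1.0 + growth);  // each ring is 'growth' fraction wider than the last
                }
                return rings;
            }
        }

    Vertex offsets within each ring scale by the same factor, which is what keeps the projected triangle sizes approximately constant; heights still come from the height texture in the vertex shader, as the post describes.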

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job which starts but doesn't complete. Running the command manually works fine. I have already read the page about cron issues and solutions here on AskUbuntu and tried the proposed solutions, but didn't find an answer that works in my case. I'm using Ubuntu 12.04.

        $ crontab -e
        SHELL=/bin/bash # otherwise it would be /bin/sh
        59 16 * * * /bin/duply calendar backup > /tmp/duply.log

    By the way, the cron file ends with an empty line, as someone pointed out it should. Once the job has "finished":

        $ cat /tmp/duply.log
        Start duply v1.5.7, time is 2012-06-22 16:59:01.

    Instead, running the script manually works correctly and gives this output:

        Start duply v1.5.7, time is 2012-06-22 17:06:39.
        [cut]
        ... here is a long output generated by duply.
        ... and yes, files have been backed up.
        [cut]
        --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---

    I also tried restarting the cron daemon (sudo service cron restart), but nothing changed. Do you have any suggestions to fix the issue?

    Read the article

  • SQLRally Nordic gets underway

    - by Rob Farley
    PASS is becoming more international, which is great. The SQL community has always been international – it's not as if data is only generated in North America. And while it's easy for organisations to have a North American focus, PASS is taking steps to become international. Regular readers will be aware that I'm one of three advisors to the PASS Board of Directors, with a focus on developing PASS as a more global organisation. With this in mind, it's great that today is Day 1 of SQLRally Nordic, being hosted in Sweden – not only a non-American country, but one that doesn't have English as its major language. The event has been hosted by the amazing Johan Åhlén and Raoul Illyés, two guys who I met earlier this year, but the thing that amazes me is the incredible support that this event has from the SQL community. It's been sold out for a long time, and when you see the list of speakers, it's not surprising. Some of the industry's biggest names from Microsoft have turned up, including Mark Souza (who is also a PASS Director), Thomas Kejser and Tobias Thernström. Business intelligence experts such as Jen Stirrup, Chris Webb, Peter Myers, Marco Russo and Alberto Ferrari are there, as are some of the most awarded SQL MVPs, such as Itzik Ben-Gan, Aaron Bertrand and Kevin Kline. The sponsor list is also brilliant, with names such as HP, FusionIO, SQL Sentry, Quest and SolidQ complemented by Swedish companies like Cornerstone, Informator, B3IT and Addskills. As someone who is interested in PASS becoming global, I'm really excited to see this event happening, and I hope it's a launch-pad into many other international events hosted by the SQL community. If you have the opportunity, thank Johan and Raoul for putting this event on, and the speakers and sponsors for helping support it. The noise from Twitter is that everything is going fantastically well, and everyone involved should be thoroughly congratulated! @rob_farley

    Read the article

  • New partnership allows auto-transposition of client/server application to Windows Azure

    - by Webgui
    The economics of IT is changing rapidly, and organizations are looking to widen and secure the availability of their systems while at the same time lowering costs, which is exactly what the cloud is meant to do. Running your systems on Microsoft's Windows Azure cloud, for example, would improve and secure the availability, accessibility and scalability (both up and down) of your systems and support the new IT economics. However, in order to take advantage of the cloud's promise of lower cost of ownership, applications must be built or adjusted to work on that platform, and in most cases this is not a simple task. Even existing web applications cannot always be transferred to Azure without some changes, and for client/server applications the task is far more challenging, to the point where it can seem impossible. The reason is the gap between client/server desktop technology and the cloud's. For that reason, most of the known methodologies for migrating existing client/server applications actually involve a rewrite of the desktop systems for the cloud. A unique approach is introduced by Visual WebGui: it creates a virtualization layer atop the ASP.NET web server, moves the transformed or generated .NET code to that layer, and then, using a patent-pending protocol, renders a user interface within a plain browser. The end result is pure .NET code that is a base code for a pure rich web application, and now, due to a collaboration with Microsoft Windows Azure, Visual WebGui provides the shortest path from client/server to the Azure cloud, handling close to 95% of the transformation to the cloud platform automatically. Application migration to Azure without migraines. More information about the Instant CloudMove Azure solution here.

    Read the article

  • How to reduce the fan noise and how to increase battery life?

    - by mehdi
    I have a brand new Sony Vaio S series laptop (VPCSA2DGX). It came factory-installed with Windows 7 Professional Edition 64-bit and runs an Intel Core i5 with a 500 GB HDD and 4 GB RAM. First I installed Ubuntu 11.10 64-bit alongside Windows to dual boot. Since the problem did not go away, I later installed Ubuntu 12.04 64-bit alongside Windows instead. However, the problem keeps annoying me. Problem: when running Ubuntu 11.10/12.04, the battery lasts only about 1.5 hours, the fan runs loud and continuously, and a lot of heat is generated. System Monitor shows less than 5% CPU used. My laptop has hybrid graphics, and I tried turning off the AMD graphics card and keeping the Intel card on, but I cannot get the fan noise or heat to go away, and consequently the battery drain continues. By the way, in Windows the laptop gives 4-5 hours of battery power, the fan is silent and there is no heat problem. Any ideas on how to reduce the fan noise and increase battery life in Ubuntu 11.10/12.04?

    Read the article

  • Issue tracking multiple domains with Google Analytics

    - by user359650
    I have 2 domains, mydomain.com and mydomain.net, which I'm trying to track with the same GA code. Here are the options I turned on:

    Subdomains of mydomain: ON (examples: www.mydomain.com, apps.mydomain.com, store.mydomain.com)
    Multiple top-level domains of mydomain: ON (examples: mydomain.uk, mydomain.cn, mydomain.fr)

    Which gave me the following code:

        _gaq.push(['_setAccount', 'UA-123456789-1']);
        _gaq.push(['_setDomainName', 'mydomain.com']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);

    In this help page I read that _setDomainName must be changed for each domain, which I did:
    - if you go to mydomain.net you get _gaq.push(['_setDomainName', 'mydomain.net']);
    - if you go to mydomain.com you get _gaq.push(['_setDomainName', 'mydomain.com']);

    When I generate traffic on both mydomain.com and mydomain.net and watch the GA push requests with Firebug, I can see requests generated for both domains, and the parameter called utmhn has the proper domain value (matching both _setDomainName and the browser address bar). However, when I monitor the real-time statistics under Home->Real-Time->Overview, I see pageviews for mydomain.net BUT NOT for mydomain.com :( What am I missing to properly track both domains? PS: in the help page I mentioned, they talk about setting up cross-domain links, which I didn't do for now, as my understanding is that it shouldn't be needed for what I'm trying to do. Also, I want to mention that I do not have any tracking code for either of these 2 domains other than the one I mentioned.

    Read the article

  • Character creation using spritesheets

    - by Patrick Developer
    I am currently creating a 2D fighting game and have implemented a system where, upon starting a new game, the player is presented with the option to create a custom character. I have a set of string arrays with values that correspond to hair, headgear, chest, lower body and shoes. When the player is done selecting a variety of items from the lists, a code is generated based on the index of each item (e.g. 01123), which is then used to assign the correct spritesheet to the player character. This has already been a lot of work, as I have had to create quite a few spritesheets based on the possible combinations, but I am now looking at a massive amount of work to implement each variation. I have started to look into layering each item to reduce the workload, but I am also looking at having different stances for the character, depending on the currently equipped weapon, so this may be a lot of work either way. My question is: do I have any alternatives, or am I stuck creating masses of spritesheets to cover all combinations? As a side note, how much impact will assigning layered items have on overall performance?
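    For the layering route, the usual trick is to composite the selected layers once into a single texture at character-creation time, so the per-frame rendering cost is the same as for one pre-made spritesheet. A hedged java.awt sketch (SpriteCompositor and the file-naming scheme are invented for illustration, not from the question):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        // Hypothetical compositor: stacks one sheet per slot (drawn back to
        // front) into a single combined spritesheet for the created character.
        public final class SpriteCompositor {
            public static BufferedImage compose(int[] selection, String[] slotNames) throws IOException {
                BufferedImage result = null;
                Graphics2D g = null;
                for (int slot = 0; slot < slotNames.length; slot++) {
                    // e.g. "chest_1.png" - naming scheme is an assumption
                    BufferedImage layer = ImageIO.read(
                        new File(slotNames[slot] + "_" + selection[slot] + ".png"));
                    if (result == null) {
                        result = new BufferedImage(layer.getWidth(), layer.getHeight(),
                                                   BufferedImage.TYPE_INT_ARGB);
                        g = result.createGraphics();
                    }
                    g.drawImage(layer, 0, 0, null);  // transparent PNGs stack in slot order
                }
                if (g != null) g.dispose();
                return result;
            }
        }

    Compositing once means the only extra cost is a one-off image decode and blit per layer when the character is created, so layered items should have essentially no impact on in-game performance.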

    Read the article

  • Why kernel source is not installed

    - by Subhajit
    I want to install the kernel source in Ubuntu 12.04, which is not installed. I checked using the following command:

        dpkg -s kernel

    Output:

        Kernel is not installed
        no information available

    Hence I followed these steps to install it:

    1. Install the dependencies:

        sudo apt-get install gcc libncurses5-dev git-core kernel-package fakeroot build-essential
        sudo apt-get update && sudo apt-get upgrade

    2. Download the kernel source:

        wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.5.tar.bz2
        tar -xvf linux-3.5.tar.bz2
        cd linux-3.5/

    3. Compile the source code to generate the .deb packages:

        make-kpkg clean
        fakeroot make-kpkg --initrd --append-to-version=-spica kernel_image kernel_headers

    4. Install the .deb packages (two .deb packages are generated: one to install the kernel headers, the other to install the kernel image):

        sudo dpkg -i linux-*.deb

    But after reboot it seems the kernel is not installed (checked with dpkg -s kernel). Please tell me where I am going wrong. Also, in step 3 I guess I am installing a new kernel (with the -spica suffix), but during boot this new kernel does not show up as an option. Please help me.

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, I think this is needed for some requests, because otherwise tokens can be stolen, and before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement, as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated over: the url (possibly minus the query string); the verb; the request IP (potentially a barrier on some mobile devices, though); and the UTC date and time when the client issues the request. For the last one I would have the client send that string in a request header, of course, and I can use it to decide whether the request is 'fresh' enough. My thinking is that while this doesn't prevent message-body tampering, it does prevent a malicious third party from using a captured request as a template for different requests later on. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for the HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
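    For reference, computing that kind of MAC is a few lines with the JDK's javax.crypto; the canonical-string layout below (url, verb, IP, timestamp joined by newlines) is just one possible choice matching the fields in the question, not a standard:

        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        // Sketch of the proposed request MAC: HMAC-SHA-256 over a canonical
        // string built from the fields listed in the question.
        public final class RequestSigner {
            public static String sign(byte[] privateKey, String url, String verb,
                                      String clientIp, String utcTimestamp) throws Exception {
                // Canonical string: field order and separator are assumptions.
                String canonical = url + "\n" + verb + "\n" + clientIp + "\n" + utcTimestamp;
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(privateKey, "HmacSHA256"));
                byte[] digest = mac.doFinal(canonical.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(digest);  // sent in a request header
            }
        }

    The server recomputes the same string from the incoming request and compares digests with a constant-time comparison (e.g. java.security.MessageDigest.isEqual) before honouring the call.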

    Read the article

  • What can I do about Hack Attempts

    - by Matt
    I have an ASP.NET website hosted using the UltiDev Web Server Pro. Every day I get a steady stream of errors generated by my application where page requests were made and denied. This is obviously someone/something trying to find exploits on my website. Here is an example log:

        28/08/2012 11:37:11 - File not Found:http://MyWebServer/phpmyadmin/index.php
        28/08/2012 11:37:11 - File not Found:http://MyWebServer/phpMyAdmin/index.php
        28/08/2012 11:37:12 - File not Found:http://MyWebServer/phpMyAdmin-2/index.php
        28/08/2012 11:37:12 - File not Found:http://MyWebServer/php-my-admin/index.php
        28/08/2012 11:37:13 - File not Found:http://MyWebServer/phpMyAdmin-2.2.3/index.php
        28/08/2012 11:37:13 - File not Found:http://MyWebServer/phpMyAdmin-2.2.6/index.php
        28/08/2012 11:37:14 - File not Found:http://MyWebServer/phpMyAdmin-2.5.1/index.php
        28/08/2012 11:37:14 - File not Found:http://MyWebServer/phpMyAdmin-2.5.4/index.php
        28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-rc1/index.php
        28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-rc2/index.php
        28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5/index.php
        28/08/2012 11:37:16 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-pl1/index.php
        28/08/2012 11:37:16 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6-rc1/index.php
        28/08/2012 11:37:17 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6-rc2/index.php
        28/08/2012 11:37:18 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6/index.php
        28/08/2012 11:37:18 - File not Found:http://MyWebServer/phpMyAdmin-2.5.7/index.php
        28/08/2012 11:37:19 - File not Found:http://MyWebServer/phpMyAdmin-2.5.7-pl1/index.php
        28/08/2012 13:52:07 - File not Found:http://MyWebServer/admin/pma/translators.html

    Is this normal? Is there anything I can do to protect myself against this?

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password, a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out. A simpler approach to reducing email entry errors is to have the user enter their email address twice, just as most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page I usually avoid entering the email address twice; instead, I enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes - any typo entered into the first textbox will be copied into the second. Using a bit of JavaScript, it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more! Read More >

    Read the article

  • Compiled code spreadsheet-like cell management? (auto-updating)

    - by proGrammar
    Okay, so I am fully aware of how spreadsheets manage cells: they build dependency graphs where, when one cell changes, it tells all the cells that depend on it that it changed. How they update, I think, involves either re-evaluating the formulas stored as strings, or re-evaluating an abstract syntax tree, which I think is stored differently and might be faster. Something like that. What I'm looking to do is manage a few variables in my code so I don't have to update them in the correct order myself, which would be a nightmare. But I also want it to be much faster than spreadsheets. And since I'm not looking for anything as feature-rich as a spreadsheet, I figured there has to be a very fast implementation of this functionality, especially since I don't have to modify cells after compiling (unless that would be an option). I'm very new to programming, so I have no idea. One option might be a code generator that generates code to do this for me, but I have no clue what the generated code would look like. Specifically, how exactly would variables inform others that they need to update, and what do those variables do to update? I'm looking for any kind of ideas. Programming is not my job, but nonetheless I was hoping to have some kind of system like this that would greatly help me with some stuff. Of course, I have been programming plenty lately, so I can still program; I just don't have the full scope of things. I'm looking for any kind of ideas, thank you very much in advance! Also, please help me with the tags. I know C# and Java mainly, and I'm hoping to implement this in either of those languages, so I'm hoping this can stay in those tags. Forcing this into some kind of spreadsheet tag wouldn't be accurate.
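    Since the question allows Java, here is a hedged sketch of the core idea: each cell caches its value and knows its dependents; a change marks dependents dirty so they recompute lazily on the next read. All names (Cell, constant, formula) are invented for illustration:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Supplier;

        // Minimal spreadsheet-style cell: push invalidation on write,
        // lazy pull recomputation on read.
        class Cell {
            private final List<Cell> dependents = new ArrayList<>();
            private Supplier<Double> formula;
            private double value;
            private boolean dirty;

            static Cell constant(double v) { Cell c = new Cell(); c.value = v; return c; }

            static Cell formula(Supplier<Double> f, Cell... inputs) {
                Cell c = new Cell();
                c.formula = f;
                c.dirty = true;
                for (Cell in : inputs) in.dependents.add(c);  // register in the dependency graph
                return c;
            }

            void set(double v) { value = v; invalidateDependents(); }

            double get() {
                if (dirty) { value = formula.get(); dirty = false; }  // lazy recompute
                return value;
            }

            private void invalidateDependents() {
                for (Cell d : dependents) {
                    if (!d.dirty) { d.dirty = true; d.invalidateDependents(); }
                }
            }
        }

    Usage would look like: Cell a = Cell.constant(2); Cell b = Cell.constant(3); Cell sum = Cell.formula(() -> a.get() + b.get(), a, b); after a.set(10), sum.get() returns 13. Because each formula is a compiled lambda rather than a parsed string, every update is just a few method calls, already far cheaper per change than a spreadsheet engine.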

    Read the article

  • Why doesn't my grub background show?

    - by luri
    I've tried to change the resolution, colors and background image for my grub menu, but I get no background (well, just a black one, no image)... What am I doing wrong? This is my grub.cfg (omitting the OSes part):

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          set have_grubenv=true
          load_env
        fi
        set default="${saved_entry}"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi

        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }

        function recordfail {
          set recordfail=1
          if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
        }

        function load_video {
          insmod vbe
          insmod vga
        }

        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=1280x1024x24
          load_video
          insmod gfxterm
        fi
        terminal_output gfxterm
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        set locale_dir=($root)/boot/grub/locale
        set lang=es
        insmod gettext
        if [ "${recordfail}" = 1 ]; then
          set timeout=-1
        else
          set timeout=10
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        insmod jpeg
        if background_image /boot/grub/Serenity_Enchanted_by_sirpecangum.jpg ; then
          set color_normal=black/white
          set color_highlight=brown/light-gray
        else
          set menu_color_normal=white/black
          set menu_color_highlight=black/light-gray
        fi
        ### END /etc/grub.d/05_debian_theme ###

    The selected image has been copied to /boot/grub/Serenity_Enchanted_by_sirpecangum.jpg, with no luck. I'm surely missing something (probably something obvious), but I don't really get it...

    Read the article

  • A Simple Solution For NetBeans RCP Apps That Need A Groovy Editor

    - by Geertjan
    Take a look at Nils Hoffmann's metabolomic analyzer, especially at the Groovy editor contained within it: Obviously, it would be cool if the Groovy editor in the app above had syntax coloring and other editor features helpful for coding Groovy. However, as I showed in If You Include the Groovy Editor, the NetBeans Groovy support has multiple dependencies on other modules that would be completely superfluous in the above application, and they'd make the app much heavier than it is, simply because of all the Groovy dependencies. But today I thought of a simple solution. Why not take the Groovy.g file (i.e., the ANTLR definition), such as this one [though that's probably not the most up to date one, wondering how to find the most up to date one], and then apply the content of this screencast (made by me) to the Groovy.g file: Within a few minutes, you should end up with Groovy syntax coloring. OK, so that's not a full-blown Groovy editor, but syntax coloring is surely a cool thing to have in the app with which this blog entry started? Sure, this means creating a new Groovy editor from scratch. But the point is that doing so can be very simple, i.e., the syntax coloring can simply be generated via the simple instructions above. I'm going to try it myself in the next few days, but it would be cool if others out there would try this too!

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and this could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd when you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even though you don't get a syntax error compiling the code, you might obtain this error at execution time: The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only. If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services are available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can be used if you have the same problem in a Multidimensional model.

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching a CEO of a new "cloud computing" company describe his company on a finance TV program today, I heard him say something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network-services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form that a user fills out, pushes a button on the page, and gets back some report that was generated by the web server, isn't that the same as "cloud" computing? And wouldn't you consider my web browser the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the closest one in the Stack universe, and this is my first time here. I'm an old-timer, programming since mainframe days in the late 70s.

    Read the article
