Search Results

Search found 89090 results on 3564 pages for 'optimize code'.


  • Full Portfolio of x86 Systems On Display at Oracle OpenWorld

    - by kgee
    This OpenWorld, Oracle’s x86 hardware team will have two hardware demos showcasing the new X3 systems, as well as several other x86 solutions such as the ZFS Storage Appliance, Oracle Database Appliance and the Carrier Grade NETRA systems. These two demos are located in the South Hall in Oracle’s booth 1133 and Intel’s booth 1101. The Intel booth will feature additional demos, including 3D demos of each server, a static architectural demo, the Oracle x86 Grand Prix video game and the Intel Theatre featuring several presentations by Intel’s partners. Oracle’s Intel Theatre schedule and topics include:
    Monday
    1. 10:30 a.m. - Engineered to Work Together: Oracle x86 Systems in the Data Center
    2. 12:30 p.m. - The Oracle NoSQL Database on the Intel Platform
    3. 1:30 p.m. - Accelerate Your Path to Cloud with Oracle VM
    4. 3:30 p.m. - Why Oracle Linux is the Best Linux for Your Intel Based Systems
    5. 4:30 p.m. - Accelerate Your Path to Cloud with Oracle VM
    Tuesday
    1. 10:00 a.m. - "Speed of Thought" Analytics Using In-Memory Analytics
    2. 1:30 p.m. - A Storage Architecture for Big Data: "It’s Not JUST Hadoop"
    3. 2:00 p.m. - Oracle Optimized Solution for Enterprise Cloud Infrastructure
    4. 2:30 p.m. - Configuring Storage to Optimize Database Performance and Efficiency
    5. 3:30 p.m. - Total Cloud Control for Oracle's x86 Systems
    Wednesday
    1. 10:00 a.m. - Big Data Analysis Using R-Programming Language
    2. 11:30 a.m. - Extreme Performance Overview: The Oracle Exadata Database Machine
    3. 1:30 p.m. - Oracle TimesTen In-Memory Database Overview

    Read the article

  • OpenWorld Session: BPM Analytics - Process Dashboards, BAM and Intelligent Optimization

    - by Ajay Khanna
    Blog contributed by Payal Srivastava, Oracle Product Management. The margin of error for the business is shrinking dramatically. Businesses need to accomplish more with less, i.e. minimal investment with quick ROI. Learn how you can leverage the Oracle BPM Suite and complementary technologies to create a robust analytics capability that provides visibility into operations for C-level executives and operational managers. We will talk about the BPM analytics options available today that will not only enhance visibility but also allow you to intelligently optimize the business process at design time as well as at run time. The session will also share some exciting things on our roadmap. Come meet the Oracle team members from Product Development (Avinash Dabholkar, Eric Hsiao) and Product Management (Payal Srivastava) at the session. We would like to hear your questions/comments about our offering and roadmap. BPM Analytics: Process Dashboards, BAM and Intelligent Optimization, Moscone South 308, 10/3/12 @11:45am – CON 8598

    Read the article

  • How to hire support people?

    - by Martin
    I manage a tech support team at a mid-sized software company. We are the last line of support, so issues that we can't fix need to be escalated to the development team. When I joined the company, our team wasn't capable of much beyond using a specific set of troubleshooting steps to solve known issues and escalating anything else to the developers. It's always been a goal of mine for our team to shoulder as much of the support burden as possible without ever bothering a developer. Over the past few years, I, along with several new hires, have made pretty good progress in that direction. We've coded our own troubleshooting tools, which now ship with several of our products. When users have never-before-seen issues, we analyze stack traces and troubleshoot down to the code level, and if we need to submit a bug, half the time we've already identified where in the code the bug is and offered a patch to fix it. Here's the problem I've always had: finding support people capable of the work I've described above is really difficult. I've hired 3 people in the past 3 years, and I've probably looked at several thousand resumes and conducted several hundred phone screens to do so. I know it's pretty well accepted that hiring good people is tough in the tech industry, but it seems that support is especially difficult -- there are clearly thousands of people walking around calling themselves support analysts, but 99%+ of them seemingly aren't capable of anything beyond reading a script. I'm curious if anyone has experience recruiting the sort of folks I'm talking about, and if you have any suggestions to share. We've tried all sorts of things -- different job titles/descriptions, using headhunters, etc. And while we've managed to hire a few good folks, it's basically taken us a year to find an appropriate candidate for each opening we've had, and I can't help but wonder if there's something we could be doing differently.

    Read the article

  • Should I have separate business and personal websites?

    - by Thomas Clowes
    I have my business website - I am a web designer and developer, and I also buy/sell websites/domain names. As such, my website links to 'Our sites': the websites which we design and run, as well as a variety of tools such as a domain whois tool. These are obviously relevant to the business. As an individual, I like to travel and do white water kayaking as a hobby. I also have a degree in economics. I have thus created a blog on my business website where I write about domain names, web design, kayaking, travelling and economics. I've just begun researching SEO and am looking into optimizing my business website. I don't actually directly offer any services to clients at the moment; my main aim is to have a business website which supports my websites. If, for example, a potential advertiser on one of my sites checks out the business website, I want them to think: professional, down to earth, quirky. Given this, is having my business/personal interests intertwined a problem? For SEO: on my homepage, for example, when I'm writing a headline and a paragraph about what we do, what do I put, and how do I optimize it with keywords and the like? Further to the above, my company sponsors me and a group of acquaintances as a kayaking team, so my personal interests do sort of overlap (just to add a complexity :))

    Read the article

  • OOP for unit testing : The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it's stated that you should limit class instantiations in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are sent as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):

        class Some_class {
            private $_data;
            private $_options;
            private $_locale;

            public function __construct($data, $options = null) {
                $this->_data = $data;
                if ($options != null) {
                    $this->_options = $options;
                }
                $this->_init();
            }

            private function _init() {
                if (isset($this->_options['locale'])) {
                    $locale = $this->_options['locale'];
                    if ($locale instanceof Zend_Locale) {
                        $this->_locale = $locale;
                    } elseif (Zend_Locale::isLocale($locale)) {
                        $this->_locale = new Zend_Locale($locale);
                    } else {
                        $this->_locale = new Zend_Locale();
                    }
                }
            }
        }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale in my class but push it through the constructor (which can be done through the options array in my example). I am wondering what the best practice would be to get the most flexibility for unit testing this code, and also if I want to move away from Zend Framework. Thanks in advance.
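    As an aside, the refactoring the guide points at is plain constructor injection: build the locale (or whatever collaborator) outside and hand it in, so a test can hand in a fake. A minimal sketch of that shape, written in C++ rather than PHP purely for illustration (all names are made up, not from the question):

        #include <memory>
        #include <string>
        #include <utility>

        // Stand-in for Zend_Locale: an interface the class depends on.
        struct Locale {
            virtual ~Locale() = default;
            virtual std::string name() const = 0;
        };

        class SomeClass {
        public:
            // The collaborator is injected; the class never constructs a Locale itself,
            // so a unit test can pass a stub or mock implementation.
            SomeClass(std::string data, std::shared_ptr<Locale> locale)
                : data_(std::move(data)), locale_(std::move(locale)) {}

            std::string describe() const { return data_ + " [" + locale_->name() + "]"; }

        private:
            std::string data_;
            std::shared_ptr<Locale> locale_;
        };

    The default-construction fallback (new Zend_Locale() when no option is given) can then live in a small factory or in the composition root instead of inside the class.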

    Read the article

  • Function-Local Static Const variable Initialization semantics.

    - by Hassan Syed
    The questions are in bold, for those that cannot be bothered reading a question in depth. This is a follow-up to this question. It is to do with the initialization semantics of static variables in functions. Static variables should be initialized once, and their internal state might be altered later - as I (currently) do in the linked question. However, the code in question does not require the feature to change the state of the variable later. Let me clarify my position, since I don't require the string object's internal state to change. The code is for a trait class for metaprogramming, and as such would benefit from a const char * const ptr -- thus ideally a local static const variable is needed. My educated guess is that in this case the string in question will be optimally placed in memory by the link-loader, and that the code is more secure and maps to the intended semantics. This leads to the semantics of such a variable: "The C++ Programming Language", Third Edition (Stroustrup) does not have anything (that I could find) to say about this matter. All that is said is that the variable is initialized once, when the flow of control of the thread first reaches the code. This leads me to ponder whether the following code would be sensible, and if not, what are the intended semantics?

        #include <iostream>

        const char * const GetString(const char * x_in) {
            static const char * const x = x_in;
            return x;
        }

        int main() {
            const char * const temp = GetString("yahoo");
            std::cout << temp << std::endl;
            const char * const temp2 = GetString("yahoo2");
            std::cout << temp2 << std::endl;
        }

    The preceding code compiles on GCC and prints "yahoo" twice, which is what I want -- however, it might not be standards compliant (which is why I post this question). It might be more elegant to have two functions, "SetString" and "String", where the latter forwards to the first. If it is standards compliant, does someone know of a template implementation in Boost (or elsewhere)?
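    For what it's worth, here is a minimal sketch of the behaviour being asked about, assuming C++11 or later, where a function-local static is guaranteed to be initialized exactly once (and thread-safely) on the first pass through its declaration; later calls skip the initializer entirely. The std::string variant is mine, not the original const char* code:

        #include <cassert>
        #include <string>

        // First call wins: the static is initialized from the first argument only.
        const std::string& GetString(const std::string& first_value) {
            static const std::string value = first_value;  // runs once, on first entry
            return value;
        }

        int main() {
            assert(GetString("yahoo") == "yahoo");
            assert(GetString("yahoo2") == "yahoo");  // initializer is not re-run
            return 0;
        }

    Under C++03 the once-only initialization still holds for a single thread; the thread-safety guarantee is the C++11 addition.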

    Read the article

  • Unity throws SynchronizationLockException while debugging

    - by pjohnson
    I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls, when I never did while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place.In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough where it was worth researching it.Amongst the first and most helpful Google results was, of course, at Stack Overflow. The first couple answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain--just go to Debug|Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it.Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.

    Read the article

  • Visage

    - by Geertjan
    Raj, the Chennai JUG lead, together with others from that JUG, is interested in Visage, the JavaFX script language closely associated with Stephen Chin. He sent me the related lexer and parser and I started by having a look at them in the new version of ANTLRWorks being developed by Sam Harwell (who demonstrated it very effectively during JavaOne): Notice how the lexer and parser are shown in a tree structure, as well as in a cool syntax diagram. Next, I downloaded a bunch of JARs from here, so that packages such as from "com.sun.tools.mjavac" can be used, i.e., these are Visage-specific packages that aren't found anywhere except in the location below: http://code.google.com/p/visage/wiki/GettingStarted It turns out that there's also a Visage NetBeans plugin out there: http://code.google.com/p/visage/source/browse/?repo=netbeans-plugin Rather than recreating everything from scratch, i.e., generating ANTLR Java classes from the lexer and parser, I copied a lot of stuff from the site above and now a file Raj sent me looks as follows, i.e., basic syntax coloring is shown: For anyone wanting to seriously support Visage in NetBeans IDE, I recommend downloading the existing Visage NetBeans plugin above, rather than creating everything yourself from scratch, and then figuring out how to use that code in some way, i.e., add the JARs I pointed to above, and work on its build.xml file, which could be frustrating in the beginning, but there's no point in recreating everything if everything already exists.

    Read the article

  • How can I cull non-visible isometric tiles?

    - by james
    I have a problem which I am struggling to solve. I have a large map of around 100x100 tiles which form an isometric map. The user is able to move the map around by dragging the mouse. I am trying to optimize my game to draw only the visible tiles. So far my code is like this. It appears to be OK in the x direction, but as soon as one tile goes completely above the top of the screen, the entire column disappears. I am not sure how to detect that all of the tiles in a particular column are outside the visible region.

        double maxTilesX = widthOfScreen / halfTileWidth + 4;
        double maxTilesY = heightOfScreen / halfTileHeight + 4;
        int rowStart = Math.max(0, (-xOffset / halfTileWidth));
        int colStart = Math.max(0, (-yOffset / halfTileHeight));
        rowEnd = (int) Math.min(mapSize, rowStart + maxTilesX);
        colEnd = (int) Math.min(mapSize, colStart + maxTilesY);

    EDIT - I think I have solved my problem, but perhaps not in a very efficient way. I have taken the center-of-screen coordinates, determined which tile this corresponds to by converting the coordinates into Cartesian format, and I then update the entire box around the screen.
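    A conservative way to decide which tiles can possibly be on screen is to unproject the four screen corners into tile space and take the bounding box of the results, with a one-tile margin. The sketch below is in C++ and assumes a standard 2:1 isometric projection (screenX = (col - row) * halfTileW + offsetX, screenY = (col + row) * halfTileH + offsetY), which is an assumption about the layout rather than something taken from the question's code:

        #include <algorithm>
        #include <cmath>

        struct TileRange { int rowStart, rowEnd, colStart, colEnd; };

        TileRange visibleTiles(double offsetX, double offsetY,
                               double screenW, double screenH,
                               double halfTileW, double halfTileH, int mapSize) {
            double minRow = 1e18, maxRow = -1e18, minCol = 1e18, maxCol = -1e18;
            const double cx[4] = {0.0, screenW, 0.0, screenW};
            const double cy[4] = {0.0, 0.0, screenH, screenH};
            for (int i = 0; i < 4; ++i) {
                double sx = (cx[i] - offsetX) / halfTileW;   // invert the projection
                double sy = (cy[i] - offsetY) / halfTileH;
                double col = (sx + sy) / 2.0;
                double row = (sy - sx) / 2.0;
                minRow = std::min(minRow, row); maxRow = std::max(maxRow, row);
                minCol = std::min(minCol, col); maxCol = std::max(maxCol, col);
            }
            TileRange r;
            r.rowStart = std::max(0, (int)std::floor(minRow) - 1);
            r.colStart = std::max(0, (int)std::floor(minCol) - 1);
            r.rowEnd   = std::min(mapSize, (int)std::ceil(maxRow) + 1);
            r.colEnd   = std::min(mapSize, (int)std::ceil(maxCol) + 1);
            return r;
        }

    Because row and col are linear functions of the screen coordinates, their extremes over the screen rectangle occur at the corners, so the range above is a safe superset of the visible tiles and whole off-screen columns are culled correctly.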

    Read the article

  • Issue tracking multiple domains with Google Analytics

    - by user359650
    I have 2 domains, mydomain.com and mydomain.net, which I'm trying to track with the same GA code. Here are the options I turned on:
    Subdomains of mydomain: ON (examples: www.mydomain.com, apps.mydomain.com, store.mydomain.com)
    Multiple top-level domains of mydomain: ON (examples: mydomain.uk, mydomain.cn, mydomain.fr)
    Which gave me the following code:

        _gaq.push(['_setAccount', 'UA-123456789-1']);
        _gaq.push(['_setDomainName', 'mydomain.com']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);

    In this help page I read that _setDomainName must be changed for each domain, which I did:
    - if you go to mydomain.net you get _gaq.push(['_setDomainName', 'mydomain.net']);
    - if you go to mydomain.com you get _gaq.push(['_setDomainName', 'mydomain.com']);
    When I generate traffic on both mydomain.com and mydomain.net and watch the GA push requests with Firebug, I can see requests generated for both domains, and the parameter called utmhn has the proper domain value (which matches that of _setDomainName and the browser address bar). However, when I monitor the real-time statistics under Home->Real-Time->Overview, I see pageviews for mydomain.net BUT NOT for mydomain.com :( What am I missing to properly track both domains? PS: in the help page I mentioned they talk about setting up cross links, which I didn't do for now, as my understanding is that it shouldn't be needed for what I'm trying to do to work. Also, I want to mention that I do not have any tracking code for either of these 2 domains other than the one I mentioned.

    Read the article

  • C++ Unlocking a std::mutex before calling std::unique_lock wait

    - by Sant Kadog
    I have a multithreaded application (using std::thread) with a manager (class Tree) that executes some piece of code on different subtrees (embedded struct SubTree) in parallel. The basic idea is that each instance of SubTree has a deque that store objects. If the deque is empty, the thread waits until a new element is inserted in the deque or the termination criteria is reached. One subtree can generate objects and push them in the deque of another subtree. For convenience, all my std::mutex, std::locks and std::variable_condition are stored in a struct called "locks". The class Tree creates some threads that run the following method (first attempt) : void Tree::launch(SubTree & st, Locks & locks ) { /* some code */ std::lock_guard<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem is that "deque_lock" is still locked while the thread is waiting. Hence no object can be added in the deque of the current thread by a concurrent one. So I turned the lock_guard into a unique_lock and managed the lock/unlock manually : void launch(SubTree & st, Locks & locks ) { /* some code */ std::unique_lock<std::mutex> deque_lock(locks.deque_mutex_[st.id_]) ; // lock the access to the deque of subtree st if (st.deque_.empty()) // check that the deque is still empty { deque_lock.unlock() ; // unlock the access to the deque to enable the other threads to add objects // DATA RACE : nothing must happen to the unprotected deque here !!!!!! // some threads are still running, wait for them to terminate std::unique_lock<std::mutex> wait_lock(locks.restart_mutex_[st.id_]) ; locks.restart_condition_[st.id_].wait(wait_lock) ; } /* some code */ } The problem now, is that there is a data race, and I would like to make sure that the "wait" instruction is performed directly after the "deque_lock.unlock()" one. Would anyone know a way to create such a critical instruction sequence with the standard library ? Thanks in advance.
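    For reference, the standard shape for this in C++11 is to guard the deque and the condition variable with the same mutex and let wait() release it atomically while re-checking the emptiness condition as a predicate; that closes exactly the gap the question worries about. A simplified sketch (names are mine, not the Locks struct from the question):

        #include <condition_variable>
        #include <deque>
        #include <mutex>

        struct SubTreeQueue {
            std::mutex m;                      // one mutex guards the deque and the wait
            std::condition_variable cv;
            std::deque<int> work;
            bool finished = false;

            void push(int item) {
                { std::lock_guard<std::mutex> lk(m); work.push_back(item); }
                cv.notify_one();
            }

            bool pop(int& out) {               // returns false once finished and drained
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return !work.empty() || finished; });  // unlock + sleep is atomic
                if (work.empty()) return false;
                out = work.front();
                work.pop_front();
                return true;
            }
        };

    Because wait() only returns with the lock re-acquired and the predicate true, no separate restart_mutex is needed and there is no unprotected window on the deque.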

    Read the article

  • Error instantiating Texture2D in MonoGame for Windows 8 Metro Apps

    - by JimmyBoh
    I have an game which builds for WindowsGL and Windows8. The WindowsGL works fine, but the Windows8 build throws an error when trying to instantiate a new Texture2D. The Code: var texture = new Texture2D(CurrentGame.SpriteBatch.GraphicsDevice, width, 1); // Error thrown here... texture.setData(FunctionThatReturnsColors()); You can find the rest of the code on Github. The Error: SharpDX.SharpDXException was unhandled by user code HResult=-2147024809 Message=HRESULT: [0x80070057], Module: [Unknown], ApiCode: [Unknown/Unknown], Message: The parameter is incorrect. Source=SharpDX StackTrace: at SharpDX.Result.CheckError() at SharpDX.Direct3D11.Device.CreateTexture2D(Texture2DDescription& descRef, DataBox[] initialDataRef, Texture2D texture2DOut) at SharpDX.Direct3D11.Texture2D..ctor(Device device, Texture2DDescription description) at Microsoft.Xna.Framework.Graphics.Texture2D..ctor(GraphicsDevice graphicsDevice, Int32 width, Int32 height, Boolean mipmap, SurfaceFormat format, Boolean renderTarget) at Microsoft.Xna.Framework.Graphics.Texture2D..ctor(GraphicsDevice graphicsDevice, Int32 width, Int32 height) at BrewmasterEngine.Graphics.Content.Gradient.CreateHorizontal(Int32 width, Color left, Color right) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Graphics\Content\Gradient.cs:line 16 at SampleGame.Menu.Widgets.GradientBackground.UpdateBounds(Object sender, EventArgs args) in c:\Projects\Personal\GitHub\BrewmasterEngine\SampleGame\Menu\Widgets\GradientBackground.cs:line 39 at SampleGame.Menu.Widgets.GradientBackground..ctor(Color start, Color stop, Int32 scrollamount, Single scrollspeed, Boolean horizontal) in c:\Projects\Personal\GitHub\BrewmasterEngine\SampleGame\Menu\Widgets\GradientBackground.cs:line 25 at SampleGame.Scenes.IntroScene.Load(Action done) in c:\Projects\Personal\GitHub\BrewmasterEngine\SampleGame\Scenes\IntroScene.cs:line 23 at BrewmasterEngine.Scenes.Scene.LoadScene(Action`1 callback) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Scenes\Scene.cs:line 89 at BrewmasterEngine.Scenes.SceneManager.Load(String sceneName, Action`1 callback) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Scenes\SceneManager.cs:line 69 at BrewmasterEngine.Scenes.SceneManager.LoadDefaultScene(Action`1 callback) in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Scenes\SceneManager.cs:line 83 at BrewmasterEngine.Framework.Game2D.LoadContent() in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Framework\Game2D.cs:line 117 at Microsoft.Xna.Framework.Game.Initialize() at BrewmasterEngine.Framework.Game2D.Initialize() in c:\Projects\Personal\GitHub\BrewmasterEngine\BrewmasterEngine\Framework\Game2D.cs:line 105 at Microsoft.Xna.Framework.Game.DoInitialize() at Microsoft.Xna.Framework.Game.Run(GameRunBehavior runBehavior) at Microsoft.Xna.Framework.Game.Run() at Microsoft.Xna.Framework.MetroFrameworkView`1.Run() InnerException: Is this an error that needs to be solved in MonoGame, or is there something that I need to do differently in my engine and game?

    Read the article

  • GetContactList stops reporting collisions on welded bodies

    - by Henrique Jung
    I have a strange problem with my game, which uses Box2D as its physics engine, and I'm out of ideas on what I can do to solve it. My game is a class assignment where I need to build a simple game where the main character moves in a 2D environment while square blocks come from below him. Each time a collision occurs, that block is attached to the character using a weld joint; when three blocks of the same color are together, they annihilate themselves (an effect similar to Bejeweled). I'm using a recursive function to iterate through all the attached blocks of a given block to see if there are enough blocks for them to be deleted. I'm using the GetContactList function to iterate through the list of contacts to see which blocks are adjacent to each other. The results are quite disappointing: the blocks only get annihilated in a few cases. After a lot of debugging, I found the issue, but I still don't know how to solve it. My issue is: after some time, GetContactList STOPS returning contacts (returns NULL) for blocks that were already attached for some time. I spent some time reading the Box2D manual as well as some tutorials and still didn't find any clue of what is happening. Below is a simplified version of the code that I wrote.

        for (int a = 0; a < blocksList.size(); a++) {
            blocksList[a].BuildConnections();
        }

    And in BuildConnections:

        b2ContactEdge* edge = body->GetContactList();
        while (edge != NULL) {
            if (long_check_to_see_if_there's_a_block_nearby) {
                // add itself to the list to be annihilated
                globalList.push_back(this);
                // if there is, call BuildConnections again on the adjacent block
                adjacentBody->GetUserData()->BuildConnections;
            }
            edge = edge->next;
        }

    I know that there's another issue related to circular inclusions, but I'm fairly sure that it isn't what's causing the problem with the collisions. You can download my entire code from this page if you'd like: http://code.google.com/p/fellz/source/list
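    As a point of reference, the usual way to walk a body's current contacts in Box2D is to follow the b2ContactEdge list and skip edges whose contact is not actually touching, since the list also contains pairs whose AABBs merely overlap. A hedged C++ sketch against the Box2D 2.x API:

        #include <Box2D/Box2D.h>

        // Visit every body currently in real contact with `body`.
        template <typename Visitor>
        void forEachTouchingNeighbour(b2Body* body, Visitor visit) {
            for (b2ContactEdge* edge = body->GetContactList(); edge != nullptr; edge = edge->next) {
                if (!edge->contact->IsTouching()) {
                    continue;  // AABB overlap without an actual touching manifold
                }
                visit(edge->other);  // the body on the other side of this contact
            }
        }

    Also worth checking in a setup like this: b2JointDef::collideConnected defaults to false, so once two bodies are joined by a weld joint Box2D stops generating contacts between that pair unless the joint was created with collideConnected set to true; recording your own adjacency at the moment of welding avoids relying on those contacts later.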

    Read the article

  • Draw Rectangle To All Dimensions of Image

    - by opiop65
    I have some rudimentary collision code: public class Collision { static boolean isColliding = false; static Rectangle player; static Rectangle female; public static void collision(){ Rectangle player = Game.Playerbounds(); Rectangle female = Game.Femalebounds(); if(player.intersects(female)){ isColliding = true; }else{ isColliding = false; } } } And this is the rectangle code: public static Rectangle Playerbounds() { return(new Rectangle(posX, posY, 25, 25)); } public static Rectangle Femalebounds() { return(new Rectangle(femaleX, femaleY, 25, 25)); } My InputHandling class: public static void movePlayer(GameContainer gc, int delta){ Input input = gc.getInput(); if(input.isKeyDown(input.KEY_W)){ Game.posY -= walkSpeed * delta; walkUp = true; if(Collision.isColliding == true){ Game.posY += walkSpeed * delta; } } if(input.isKeyDown(input.KEY_S)){ Game.posY += walkSpeed * delta; walkDown = true; if(Collision.isColliding == true){ Game.posY -= walkSpeed * delta; } } if(input.isKeyDown(input.KEY_D)){ Game.posX += walkSpeed * delta; walkRight = true; if(Collision.isColliding == true){ Game.posX -= walkSpeed * delta; } } if(input.isKeyDown(input.KEY_A)){ Game.posX -= walkSpeed * delta; walkLeft = true; if(Collision.isColliding == true){ Game.posX += walkSpeed * delta; } } } The code works partially. Only the right and top side of the images collide. How do I correct the rectangle so it will draw on all sides? Thanks for any suggestions!

    Read the article

  • Delphi LoadLibrary Failing to find DLL in another directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code:

        FOOLib := LoadLibrary('foo.dll');
        ...
        If FOOLib <> 0 then
        begin
          FOOProc := GetProcAddress(FOOLib, 'xInjectCert');
          FOOProc(myHttpRequest, Data, CertName);
        end;

    It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0, and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production the cert is missing, so the connection fails. Obviously, we should have fully qualified the path to the DLL. Without going into a lot of details, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer. I've found that placing c:\fooapp into the path works. As does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find it that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can:
    1. Modify the Windows path, probably in the installer. That's ugly.
    2. Add a second copy of the DLL directly into the unwantedstepchild folder. Also ugly.
    3. Delay the project while we code and test a proper fix. Unacceptable.
    4. Other?
    Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!
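    For the release where the code can change again, the usual cure is to stop depending on the DLL search order: either load by fully qualified path, or add the known directory to the process's search path before loading. A small Win32 sketch of both ideas, written in C++ purely for illustration (the paths are just this question's example locations, and the export's real signature is not known here, hence the untyped FARPROC):

        #include <windows.h>

        HMODULE LoadFooLibrary() {
            // Option 1: fully qualified path - no search order involved at all.
            HMODULE lib = LoadLibraryW(L"C:\\fooapp\\foo.dll");
            if (lib != NULL) return lib;

            // Option 2: extend this process's DLL search path, then retry by name.
            SetDllDirectoryW(L"C:\\fooapp");
            return LoadLibraryW(L"foo.dll");
        }

        void InjectCertIfAvailable() {
            HMODULE lib = LoadFooLibrary();
            if (lib == NULL) return;                     // log here instead of failing silently
            FARPROC proc = GetProcAddress(lib, "xInjectCert");
            if (proc != NULL) {
                // cast `proc` to the real signature and call it, as the Delphi code does
            }
        }

    This is the change that could not ship in time; within the installer-only constraint, the PATH tweak or the local copy of the DLL remain the practical workarounds.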

    Read the article

  • New partnership allows auto-transposition of client/server application to Windows Azure

    - by Webgui
    The economics of IT is changing rapidly, and organizations are searching to widen and secure availability of their systems and at the same time lower costs which is exactly what the cloud meant to do. Running your systems on Microsoft’s Windows Azure cloud for example would improve and secure the availability, accessibility and scalability (both up and down) of your systems and support the new IT economics. However, in order to take advantage of the cloud's promise of lower cost of ownership, the applications must be built or adjusted to work on that platform and in most cases this is not a simple task.  Even existing web applications cannot always be transferred to Azure without some changes, and for client/server applications, the task is way more challenging even to the point where it seems impossible. The reason is the gaps between the client/server desktop technology and the cloud's. For that reason, most of the known methodologies to migrate existing client/server applications actually involve rewrite of the desktop systems for the cloud. A unique approach is introduced by Visual WebGui which creates a virtualization layer atop ASP.Net web server, it moves the transformed or generated .Net code to that layer, and then using a patent pending protocol it renders a user interface within a plain browser. The end result is pure .NET code that is a base code for a pure rich web application and now due to a collaboration with Microsoft Windows Azure Visual WebGui provides the shortest path from client/server to the Azure cloud by being able to handle close to 95% of the transformation to the cloud platform in an automatic way. Application Migration to Azure without migraines More information about the Instant CloudMove Azure solution here.

    Read the article

  • PHP: How to implement a __get-like method for local function variables

    - by Tom Frost
    I'm no stranger to __get(), and have used it to make some very convenient libraries in the past. However, I'm faced with a new challenge (PHP 5.3; I've abbreviated and simplified my code for this question):

        <?php
        namespace test;

        class View {
            function __construct($filename, $varArray) {
                $this->filename = $filename;
                $this->varArray = $varArray;
            }

            function display() {
                include($this->filename);
            }

            function __get($varName) {
                if (isset($this->varArray[$varName]))
                    return $this->varArray[$varName];
                return "?? $varName ??";
            }
        }
        ?>

    Above is a very, very simplified system for loading a View. This code would call the view and display it:

        <?php
        require_once("View.php");
        use test\View;

        $view = new View("views/myview.php", array("user" => "Tom"));
        $view->display();
        ?>

    My goal for this code is to allow the view "myview.php" to contain code like this:

        <p>
            Hello <?php echo $user; ?>! Your E-mail is <?php echo $email; ?>
        </p>

    And, used with the above code, this would output "Hello Tom! Your E-mail is ?? email ??". However, this won't work. The view is being included within a class method, so when it refers to $user and $email, it's looking for local function variables -- not variables belonging to the View class. For this reason, __get never gets triggered. I could change all my view's variables to things like $this->user and $this->email, but that would be a messy and unintuitive solution. I'd love to find a way where I can reference variables directly WITHOUT having PHP throw an error when an undefined variable is used. Thoughts? Is there a clean way to do this, or am I forced to resort to hacky solutions?

    Read the article

  • Mobile cross-platform SDK for computationally intensive apps

    - by K.Steff
    I am aware of the PhoneGap toolkit for creating mobile applications for virtually all mobile platforms with a significant market share. However, the code in PhoneGap that is shared between the different platforms is written in JavaScript. While I like JS, I think it's hardly appropriate for computationally intensive tasks. The situation with Titanium is pretty much the same. So, is there any way that I can create a cross-platform mobile app that has the computationally intensive code shared between the platforms? Some context: Obviously, I don't want to implement the time consuming algorithm in many different languages, since this violates DRY, increases the chance for bugs slipping in at least one version and require boilerplate code to work. I've looked at Xamarin's MonoTouch and Mono for Android tools, but while they cover iOS and Android, they're not nearly as versatile for deployment as PhoneGap. On the other hand, (IMO) the statically typed nature of C# is more suited for intense computation than JS. Are there any other SDK/tools appropriate for the task that I don't know about or a point about the mentioned above that I've missed? Also, uploading data to a web service for processing is not an option, because of the traffic required.
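    One approach the question leaves unexplored is keeping the computationally heavy part in a portable C++ core and binding it per platform (JNI/NDK on Android, Objective-C++ on iOS, P/Invoke or a native component on Windows Phone), so only thin UI glue differs per platform. A deliberately trivial sketch of what such a shared core looks like; the function is a made-up placeholder, not a real SDK API:

        // shared_core.cpp - portable C++, no platform headers, compiled into each app.
        #include <cstddef>

        extern "C" {
        // Plain C linkage so JNI, Objective-C(++) and P/Invoke can all call it directly.
        double heavy_computation(const double* samples, std::size_t count) {
            double acc = 0.0;
            for (std::size_t i = 0; i < count; ++i) {
                acc += samples[i] * samples[i];   // stand-in for the real number crunching
            }
            return acc;
        }
        }

    The trade-off versus PhoneGap or Titanium is the mirror image: the hot path is shared and native-speed, but the UI layer is written once per platform.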

    Read the article

  • What I saw at TechEd North America 2014

    - by Brian Schroer
    Originally posted on: http://geekswithblogs.net/brians/archive/2014/05/19/teched-north-america-2014.aspx
    I was thrilled to be able to attend TechEd North America 2014 in Houston last week. I got to go to Orlando in 2008, and since then I’ve had to settle for watching the sessions online (which ain’t bad – they’re all available on Channel 9 for streaming or downloading. Here are links to the Developer Track sessions and to the sessions from all tracks.) The sessions I attended (with my favorites bolded) were:
    Shiny new stuff
    - The Microsoft Application Platform for Developers: Create Applications That Span Devices and Services
    - INTRODUCING: The Future of .NET on the Server
    - DEEP DIVE: The Future of .NET on the Server
    - ASP.NET: Building Web Application Using ASP.NET and Visual Studio
    - The Next Generation of .NET for Building Applications
    - The Future of Visual Basic and C#
    Stuff you can use now
    - Building Rich Apps with AngularJS on ASP.NET
    - Get the Most Out of Your Code Maps
    - SignalR: Building Real-Time Applications with ASP.NET SignalR
    - Performance Optimize Your ASP.NET Web App
    - Modern Web and Visual Studio
    - Visual Studio Power User: Tips and Tricks
    - Debugging Tips and Tricks in Visual Studio 2013
    In a world where the whole company uses TFS…
    - Using Functional, Exploratory and Acceptance Testing to Release with Confidence
    - A Practical View of Release Management for Visual Studio 2013
    - From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum
    Ain’t Nobody Got Time for That
    As usual, there were some time slots with nothing of interest and others with 5 things I wanted to see at the same time. Here are the sessions I’m still planning to watch…
    - Getting Started with TypeScript
    - Building a Large Scale JavaScript Application in TypeScript
    - Modern Application Lifecycle Management
    - Why a Hacker Can Own Your Web Servers in a Day!
    - Async Best Practices for C# and Visual Basic
    - Building Multi-Device Apps with the New Visual Studio Tooling for Apache Cordova
    - Applying S.O.L.I.D. Principles in .NET/C#
    - Native Mobile Application Development for iOS, Android, and Windows in C# and Visual Studio Using Xamarin
    - Latest Innovations in Developing ASP.NET MVC Web Applications
    - Zero to Hero: Untested to Tested with Microsoft Fakes Using Visual Studio
    - Cool and Elegant ASP.NET Web Forms with HTML 5 for the Modern Web
    - The Present and Future of .NET in a World of Devices and Services

    Read the article

  • Does it make sense to write build scripts in C++?

    - by Klaim
    I'm using CMake to generate my projects' IDE files/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on, so I want to minimize the dependencies everywhere. Someone suggested that I use C++ to write my build scripts, instead of adding a language dependency just for that. The projects themselves already use C++, so there are several advantages that I can see (a sketch of what such a build step looks like follows below):
    - to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++);
    - C++ type safety (when using modern C++) makes everything easier to get "correct";
    - it's also the language I know best, so I'm more at ease with it, even if I'm able to write some good Python code;
    - potential gain in execution speed (but I don't think it will really be perceptible).
    However, I think there might be some drawbacks, and I'm not sure of the real impact because I haven't tried yet:
    - it might take longer to write the code (that said, I'm not sure, because I'm efficient enough in C++ to write something that works quickly, so maybe for this system it wouldn't take so long to write; compilation time shouldn't be a problem for this case);
    - I must assume that all the text files I'll read as input are in UTF-8; I'm not sure it can be easily checked at runtime in C++, and the language will not check it for you;
    - libraries in C++ are harder to manage than in scripting languages;
    - I lack experience and foresight, so maybe I'm missing advantages and drawbacks.
    So the question is: does it make sense to use C++ for this? Do you have experiences to report, and do you see advantages and disadvantages that might be important?
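    For a sense of what such a "script" amounts to, here is a self-contained C++ sketch of a typical build step (stamping a version header from a command-line argument); it is illustrative only, not anyone's actual tooling, and needs nothing beyond the standard library:

        // gen_version.cpp - tiny code-generation step, invoked from CMake.
        #include <fstream>
        #include <iostream>
        #include <string>

        int main(int argc, char** argv) {
            if (argc != 3) {
                std::cerr << "usage: gen_version <version-string> <output-header>\n";
                return 1;
            }
            std::ofstream out(argv[2]);
            if (!out) {
                std::cerr << "cannot open " << argv[2] << "\n";
                return 1;
            }
            out << "#pragma once\n"
                << "#define PROJECT_VERSION \"" << argv[1] << "\"\n";
            return out.good() ? 0 : 1;
        }

    CMake can build such a helper with add_executable and run it with add_custom_command, which keeps the toolchain down to exactly "a C++ compiler and CMake" as hoped.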

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMlib on Linux has been a topic of discussion a number of times since it was released way back when in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool important new features called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you, clustered, highly available, redundant, performance, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default since 10gR1 up to today, we do not interact differently with ASM managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and selfcontained without any other components. In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you give it to ASM (or even just creating a tablespace without asm on that device with datafile '/dev/sdf') and next time you boot up and that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. persistent device naming on Linux, especially back in those days ways to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004) : Linux async IO wasn't pretty persistent device naming including permissions (had to be owned by oracle and the dba group) was very, very difficult to manage system resource usage in terms of open file descriptors So given the above, we tried to find a way to make this easier on the admins, in many ways, similar to why we started working on OCFS a few years earlier - how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third party OS or storage vendor to write a library using a specific Oracle defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components : Define an IO interface - allow any IO to the devices to go through ASMLib Define device discovery - implement an external way of discovering, labeling devices to provide to ASM and the Oracle database instance This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). 
ODM was specified many years before we introduced ASM and allowed third party vendors to implement their own IO routines so that the database would use this library if installed and make use of the library open/read/write/close,.. routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem.. so ASMLib was not something new, it was basically based on the same model. You have libodm for just database access, you have libasm for asm/database access. Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third party extensions. ASMLib for Linux, since it was a reference implementation implemented both interfaces, the storage discovery part and the IO part. There are 2 components : Oracle ASMLib - the userspace library with config tools (a shared object and some scripts) oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/* The userspace library is a binary-only module since it links with and contains Oracle header files but is generic, we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes and this library interacts with the OS through the asm device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL,.. The library itself doesn't actually care much about the OS version, the kernel module and device cares. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can say create an ASM disk label foo on, currently /dev/sdf... So if /dev/sdf disappears and next time is /dev/sdg, we just scan for the label foo and we discover it as /dev/sdg and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything it will be taken care of. So it's a convenience thing. The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/* and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle related device driver but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels. Advantages for using ASMLib : A good async IO interface for the database, the entire IO interface is based on an optimal ASYNC model for performance A single file descriptor per Oracle process, not one per device or datafile per process reducing # of open filehandles overhead Device scanning and labeling built-in so you do not have to worry about messing with udev or devlabel, permissions or the likes which can be very complex and error prone. Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions, SLES (in fact in the early days still even for this thing called United Linux) and RHEL. 
The driver didn't make sense to get pushed into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure and QA and release management to build kernel modules for every architecture, every linux distribution and every major and minor version we worked with the vendors to get them to add this tiny kernel module to their infrastructure. (60k source code file). The folks at SuSE understood this was good for them and their customers and us and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors so for quite some time we continued to build it for RHEL and of course as we introduced Oracle Linux end of 2006 also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system and as we churned out Oracle Linux kernels whether it was for a public release or for customers that needed a one off fix where they also used asmlib, we didn't have to do any extra work it was just all nicely integrated. With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel, note to those that wonder, yes it's all in mainline Linux and under the GPL. This basically gave us all the features in the Linux kernel to checksum a data block, send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum and to the actual DISK which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk. application to disk. Because we have ASMLib we had an entry into the Linux kernel and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels, the oracleasm kernel module depends on the main kernel to have support for it so we can make use of it. Thanks to UEK and us having the ability to ship a more modern, current version of the Linux kernel we were able to introduce this feature into ASMLib for Linux from Oracle. This combined with the fact that we build the asm kernel module when we build every single UEK kernel allowed us to continue improving ASMLib and provide it to our customers. So today, we (Oracle) provide Oracle ASMLib for Oracle Linux and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5 but since RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL and we did not have the ability to use the latest kernel features to introduce the Data Integrity features and we didn't want to end up with multiple versions of asmlib as maintained by us. SuSE SLES still builds and comes with the oracleasm module and they do all the work and RHAT it certainly welcome to do the same. They don't have to rebuild the userspace library, it's really about the kernel module. 
    And finally, to re-iterate a few important things:
    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Often customers confuse ASMLib with ASM; again, ASM exists on every Oracle supported OS and on every supported Linux OS - SLES, RHEL, OL - without ASMLib.
    - Oracle ASMLib userspace is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel. Only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.
    Hope this helps.

    Read the article

  • Java "compare cannot be resolved to a type" error

    - by King Triumph
    I'm getting a strange error when attempting to use a comparator with a binary search on an array. The error states that "compareArtist cannot be resolved to a type" and is thrown by Eclipse on this code: Comparator<Song> compare = new Song.compareArtist(); I've done some searching and found references to a possible bug with Eclipse, although I have tried the code on a different computer and the error persists. I've also found similar issues regarding the capitalization of the compare method, in this case compareArtist. I've seen examples where the first word in the method name is capitalized, although it was my understanding that method names are traditionally started with a lower case letter. I have experimented with changing the capitalization but nothing has changed. I have also found references to this error if the class doesn't import the correct package. I have imported java.util in both classes in question, which to my knowledge allows the use of the Comparator. I've experimented with writing the compareArtist method within the class that has the binary search call as well as in the "Song" class, which according to my homework assignment is where it should be. I've changed the constructor accordingly and the issue persists. Lastly, I've attempted to override the Comparator compare method by implementing Comparator in the Song class and creating my own method called "compare". This returns the same error. I've only moved to calling the comparator method something different than "compare" after finding several examples that do the same. Here is the relevant code for the class that calls the binary search that uses the comparator. This code also has a local version of the compareArtist method. While it is not being called currently, the code for this method is the same as the in the class Song, where I am trying to call it from. Thanks for any advice and insight. import java.io.*; import java.util.*; public class SearchByArtistPrefix { private Song[] songs; // keep a direct reference to the song array private Song[] searchResults; // holds the results of the search private ArrayList<Song> searchList = new ArrayList<Song>(); // hold results of search while being populated. Converted to searchResults array. public SearchByArtistPrefix(SongCollection sc) { songs = sc.getAllSongs(); } public int compareArtist (Song firstSong, Song secondSong) { return firstSong.getArtist().compareTo(secondSong.getArtist()); } public Song[] search(String artistPrefix) { String artistInput = artistPrefix; int searchLength = artistInput.length(); Song searchSong = new Song(artistInput, "", ""); Comparator<Song> compare = new Song.compareArtist(); int search = Arrays.binarySearch(songs, searchSong, compare);

    Read the article

  • Libgdx ParallaxScrolling and TiledMaps

    - by kirchhoff
    I implemented ParallaxScrolling for my SideScroller project, everything is working but the tiled map (the most important part!). I've been trying out everything but it doesn't work (see the code below). I'm using ParallaxCamera from GdxTests, it's working perfectly for the background layers. I can't explain myself properly in english, so I recorded 2 videos: Before parallaxScrolling After parallaxScrolling As you can see, now the platforms appear in the middle of the Y-axis. I've got a Map class with 2 tiled maps, so I need two renderers too: private TiledMapRenderer renderer1; private TiledMapRenderer renderer2; public void update(GameCamera camera) { renderer1.setView(camera.calculateParallaxMatrix(1f, 0f), camera.position.x - camera.viewportWidth / 2, **camera.position.y - camera.viewportHeight/2**, camera.viewportWidth, camera.viewportHeight); renderer2.setView(camera.calculateParallaxMatrix(1f, 0f), camera.position.x - camera.viewportWidth / 2, **camera.position.y - camera.viewportHeight/2**, camera.viewportWidth, camera.viewportHeight); } In bold, the code I think I should change. I've tried changing parameters, even adding hardcoded values, etc, but one of two: 1. Nothing happens. 2. Platforms disappear. Here is some aditional code. The render method: world.update(delta); parallaxBackground.update(camera); clear(0.5f, 0.7f, 1.0f, 1); batch.setProjectionMatrix(camera.calculateParallaxMatrix(0, 0)); batch.disableBlending(); batch.begin(); batch.draw(background, -(int)background.getRegionWidth()/2, -(int)background.getRegionHeight()/2); batch.end(); batch.enableBlending(); parallaxBackground.draw(batch, camera); renderer.render(batch);

    Read the article

  • Simple issue tracker for 1-2 developers

    - by devoured elysium
    (I'm not sure whether this pertains to the realm of programmers@se or so@se) I'm currently working mostly alone on a project (in Java). I'm mostly alone as I have an advisor that gives me high level instructions on what to do, and will seldom make any code contribution. She will code in a couple of acceptance tests from time to time, though. I've never used an Issue Tracker before, and was thinking about starting to use one now, as I'd like to have a place where I can log possible bugs I find and keep track of them in a centralized manner. Would it be possible to integrate the issue tracker with Eclipse, better yet. So here are the constraints: It's NOT a open-source project. Our code is not to be shared with anyone! we are and will be using Subversion; we have our own Subversion server and we will keep using this same Subversion server; it must be free; it must allow at least 2 users. What is your advice on what to pick? I'm looking for the simplest solution available!

    Read the article

  • Just started a job with Scrum. Something seems to be missing. I am new to Scrum

    - by punkouter
    The code is a complete mess of a combination of classic ASP/ASP.NET. The scrum consist of us patching up the big mess or making additions to it. We are all too busy doing that to start a rewrite so I am wondering.. Where is the part in Scrum where the developers can have the power to say that enough is enough and demand that they are given time to start the big rewrite ? We seem in an endless loop of just patching old code with 'Stories'. So things are being run by the non-technical people who seem to have no desire to push for a rewrite because they don't understand how bad the code base has gotten.. So who is in charge of making this big rewrite change happen ? The devs? The scrum master? The current strategy is just find time and do it ourselves without the higher ups involved.. since they are mostly to blame for the current mess we are in.. <-insert rant about non-tech people telling tech people what to do here-

    Read the article
