Search Results

Search found 16311 results on 653 pages for 'environment variables'.


  • Continuous Retraining Tutorials

    - by foampile
    I am looking for an online resource where you can sort of design your future professional profile, and it would provide a set of tutorials that you would complete to get a basic level of familiarity with the related technologies. One of my professional problems is my learning style: I can learn either by direct hands-on experience OR by following a rigid training program that goes in a linear progression. I have a hard time learning in a multidimensional environment where the biggest challenge is to determine what needs to be learned and how to pick from a ton of books, and the least of my problems is going through the actual material. So I am looking for a reputable source that will knock those two confusing questions out for me, so I can kick back and continuously upgrade my skills without having to worry about the what and how myself. I have found some decent online tutorials for various technologies, but I have never found a single place that has all or most developer education tutorials following the same or a similar interface. I am kind of a lazy learner and would rather follow confirmed learning steps than figure out my own education path just to realize down the road that I did it all wrong. Is there a tutorial mega-boutique like that online?

    Read the article

  • How to convince an employer to move to VB.Net for new development?

    - by Dabblernl
    Some history: For the last six months I have been employed at a small firm with just three programmers, my employer among them. The firm maintains two programs written in VB6, and I am assigned as the lead programmer to one of them. In the last six months I did some maintenance and bug hunting, but created some new functionality too. I had an interview last December, which was favorable, and my contract was extended. I am very happy with this course of events, as I only obtained a .Net certification a year ago and have no other qualifications (in the field of coding, that is). It is my strong opinion that, while migration of the existing program to .Net is advisable, it is paramount that from now on new functionality should be written in VB.Net class libraries. After some study I found out how simple it is to integrate .Net class libraries into the VB6 development environment, and how easy it is to add their functionality to existing installations by using application manifests. So, I have decided that now is the moment to roll up my sleeves and try to convince my employer that he should let me develop new code in VB.Net, using VB6 for maintenance only. We get along quite well, but I think I am going to need all the ammunition I can get to convince him. Any arguments, preferably well backed-up ones, are very welcome, even arguments to dissuade me ;-)

    Read the article

  • At the Java DEMOgrounds - ZeroTurnaround and its LiveRebel 2.5

    - by Janice J. Heiss
    At the ZeroTurnaround demo, I spoke with Krishnan Badrinarayanan, their Product Marketing Manager. ZeroTurnaround, the creator of JRebel and LiveRebel, describes itself on its site as a company “dedicated to changing the way the world develops, tests and runs Java applications.” “We just launched LiveRebel 2.5 today,” stated Badrinarayanan, “which enables companies to embrace the concept and practice of continuous delivery, which means having a pipeline that takes products right from the developers to an end user, faster and more frequently, all the while ensuring that it’s a quality product that does not break in production. So customers don’t feel the discontinuity that something has changed under them, or that they can’t deal with the change. And all this happens while there is zero downtime.” He pointed out that Salesforce.com is not usable from 3 a.m. to 5 a.m. on Saturday because they are engaged in maintenance. “With LiveRebel 2.5, you can unify the whole delivery chain without having any downtime at all,” he said. “There are many products that tell customers to take their tools and change how they work as an organization, so that they have to conform to the way the tool prescribes an application team should work. We take a more pragmatic approach. A lot of companies might use Jenkins or Bamboo to do continuous integration. We extend that. We say: take our product, take LiveRebel, and integrate it with Jenkins. You can do that quickly, so that, in half a day, you will be up and running. And let LiveRebel automate your deployment processes and all the automated tasks that go with it, right from tests to the staging environment to production, all with zero downtime and with no impact on users currently using the system.” “So if you were to make the update right now and you had 100 users on your system, they would not even know this was happening. It would maintain their sessions and transfer them over to the new version, all in the background.”

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM
    The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight, and JavaScript). So a developer needs to use the client-side proxy objects exposed by either a .NET assembly or a JavaScript library.
    Changes with SharePoint 2013
    - REST capabilities - direct access to client.svc
    - New APIs - the app model
    REST Capabilities
    One of the most important changes to the CSOM in SharePoint 2013 is that the client.svc web service entry point has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it makes the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data-access API for HTTP-based clients) is supported much as in 2010, but it will be a more important aspect of SharePoint 2013 development.
    New APIs
    CSOM for SharePoint 2013 has been buffed up with several new APIs, covering not only SharePoint server functionality but also Windows Phone applications. In a SharePoint 2010 farm, most of the new APIs mentioned below are available only via the server-side APIs:
    - Search
    - Taxonomy
    - Publishing
    - Workflow
    - User Profiles
    - E-Discovery
    - Analytics
    - Business Data
    - IRM
    - Feeds
    SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs. So CSOM plays the savior here. Also, you can now substitute the alias '_api' in order to reference '_vti_bin/client.svc'.
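    To make the REST entry point concrete, here is a minimal sketch of calling the '_api' alias from outside SharePoint. It uses Python's requests library purely for illustration; the site URL is a placeholder and authentication is omitted, since it depends on how the farm or tenant is configured:

        import requests

        # Placeholder site URL; '_api' resolves to '_vti_bin/client.svc'.
        site = "https://sharepoint.example.com/sites/dev"

        response = requests.get(
            site + "/_api/web/lists",
            headers={"Accept": "application/json;odata=verbose"},
        )
        response.raise_for_status()

        # The OData 'verbose' payload nests results under d.results.
        for sp_list in response.json()["d"]["results"]:
            print(sp_list["Title"])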

    Read the article

  • How to manage Agile developers working with traditional (serial) business people?

    - by Riggy
    Good afternoon. My work environment has some problems. Our IT team is trying to be more agile, but we're not really getting buy-in from the business. They attend our daily stand-ups and sprint reviews, and they help with sprint planning, but then they turn around and do four months of requirements gathering for a project before moving forward with a (mostly) serial development style. The sprint goals are things like "get XX% closer to release". For the IT team, the Sprints have turned into a sort of death march: we end a Sprint one day and start a new Sprint the very next day, with no reflection or changes done between sprints, only during them. Having never done any of the agile methodologies before, I haven't had a very pleasant introduction to them. So my questions are:
    1) Should there be some time (perhaps a week or so) between sprints for reflection/introspection/changes, or are back-to-back sprints the norm?
    2) Is there any chance of survival for an agile team with no agile business counterparts? If not, are there transitional methodologies, or even tips, for moving the business towards an iterative, if not necessarily agile, mindset?
    3) Should your entire team be on every sprint? We have almost 20 programmers on a single sprint but working on completely different projects (typically teams of 3-5, sometimes larger). Is it normal to have a single sprint, or should we be trying to manage multiple independent sprints? Should we keep multiple sprints in concurrent lockstep, or should their timetables be allowed to overlap and be flexible?
    Any thoughts or advice are appreciated. This is my first time coming over from SO with a question, so please let me know if there are better ways to phrase these kinds of questions (the FAQ was rather helpful, but I'm still not sure I'm following it perfectly). Thanks!

    Read the article

  • HTML Manifest for Content Folios

    - by Kyle Hatlestad
    I recently worked on a project to create a custom content folio renderer in WebCenter Content. It needed to output the native files in the folio along with an HTML-format manifest file listing the contents of the folio, any designated metadata, and a relative link to each file within the download. This way a person could hand someone the folio download and it would be a self-contained package, with all of the content and a single file to display the information on the contents. The default Zip rendition of the folio outputs the web-viewable version of each file with an HDA-formatted file for each one, and unless you are fluent in HDA or have a tool to read them, those are difficult to consume. I thought this might be useful for others, so I'm posting a copy of the component here. Beyond the standard instructions for installing a component, there is an environment configuration file (folionativezipwithmanifestrenderer_environment.cfg) which has a couple of options:
    - FolioMetadataManifestList - a comma-separated list of metadata fields (system or custom) that should be included in the manifest file.
    - FolioMetadataManifestUseOriginalFilename - (True or False) If set to True, the filenames in the zip file will be based on the original filename as it was checked into WebCenter Content. If False, it will use the 'Name' of the item as defined within the folio, which is usually the Title of the item.
    The component also includes the source code, so feel free to use this as a reference for creating other interesting folios.
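    For illustration, a hypothetical folionativezipwithmanifestrenderer_environment.cfg using both options could look like this (the field names are only examples of standard WebCenter Content metadata; substitute your own):

        FolioMetadataManifestList=dDocName,dDocTitle,dDocAuthor
        FolioMetadataManifestUseOriginalFilename=true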

    Read the article

  • Windows 7 – Fun with VHD

    - by guybarrette
    I’m teaching TFS 2008 next week and I wanted to use TFS in a virtualized environment, so I downloaded the TFS + Team Suite VPC image from Microsoft’s website.  Working with Windows 7, I opened the VM with the built-in Windows Virtual PC.  The VM loads fine, but the problems started when I tried to install the VM additions: I simply couldn’t get them to install properly. I then looked at VMware and found that they have a product called VMware Player that can load Virtual PC VMs.  Tried that, but VMware Player failed in converting the VHD. I then looked at VirtualBox.  Created a new VM, attached the VHD and bingo!  Worked like a charm.  The only real caveat is that the guest Windows will ask for the OS CDs to install new drivers, so you must have either the CD/DVD or the ISO file (sweet!) to proceed. OK, I got it working in VirtualBox, but I’m curious why I couldn’t install the additions from Windows 7 Virtual PC onto a Windows Server 2003 VM.  Does anyone have a clue? BTW, thanks to Rolly Perreaux, who pointed me to his blog where he goes into great detail explaining how to use VM images with VirtualBox.  Good stuff!

    Read the article

  • Oracle Enterprise Manager 12c Testing-as-a-Service Solution

    - by user810030
    With organizations spending as much as 50 percent of their QA time on non-test-related activities like setting up hardware and deploying applications and test tools, the cloud brings obvious benefits. As a key component of Oracle Enterprise Manager, our Application Quality Management products have been helping our customers with application load testing, functional testing, and test process management, but also test data management, data masking, and real application testing. These products enable customers to thoroughly test applications and their underlying infrastructure to help ensure the best quality, scalability, and availability prior to deployment.  Today, Oracle announced the Oracle Enterprise Manager 12c Testing-as-a-Service Solution. This solution allows users to significantly decrease the time needed to set up a complete test environment while enhancing testing efficiency. Please read the press release mentioned above and join us in our Enterprise Manager LinkedIn Group discussion on this topic (you need to be a member), or visit our booth this week during the EuroSTAR Software Testing conference in Amsterdam, where we can demo this solution. I hope you find this helpful.

    Read the article

  • Layers - Logical separation vs. physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL (data layer), this means we create a DL namespace, not a DL assembly. Benefits include:
    - faster compilation time
    - simpler deployment
    - faster startup time for your program
    - fewer assemblies to reference
    I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies, with new assemblies created out of technical need, not layer requirements. Developers are worried for a few reasons.
    A. People like to work in their own environment so they don't step on each other's toes.
    B. Microsoft tends to create new assemblies, e.g. ASP.NET has its own DLL, and so does WinForms, etc.
    C. Devs view this drive for a common assembly as a threat, since some team members have a tendency to change the common layer without regard for how it will impact dependencies.
    My personal view: I view A as silos, a.k.a. cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is a drive for a single assembly, or my viewpoint, illogical or otherwise a bad idea?

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a server/client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so that I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. What I did in a previous version was take their IP address, the client version, and a few other attributes of the client and send it as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using and calculated that number to see if they matched. This works OK until you get clients that connect from within a router environment where their internal IP is different from their external IP. My fix for this was to pass the client's internal IP, used to calculate this hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the data used to create the "auth hash". Here's an example of what I'm talking about:
    Client IP: 192.168.1.10, Version: 2.4.5.2
    hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0)
    Client connects to server
    Client sends: auth hash, IP, version
    Server calculates that info, and accepts or denies the hash.
    Before I go and come up with another algorithm (or use this existing one) to prove that a client can provide data to a server, I was wondering if there are any existing, proven, and secure systems out there for generating a hash that both sides can compute from shared knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will be contributing data to the server periodically. New clients will be added simply by connecting the client to the server and "registering" with the server. So a client connects to the server for the first time and registers its info (MAC address or some other kind of unique computer identifier); then, when it connects again, the server will recognize that client as a previous one and associate it with its data in the database.
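    One proven construction for this kind of problem is a challenge-response handshake built on an HMAC with a shared secret, so the inputs to the hash never travel in the clear and the client's IP drops out of the equation entirely (which sidesteps the NAT issue above). Below is a minimal Python sketch of the idea; how the secret is provisioned to clients and how messages are framed on the wire are assumptions left out for brevity:

        import hashlib
        import hmac
        import os

        # Assumption: the secret is provisioned to the client out of band,
        # e.g. at registration time, and is never sent over the wire.
        SHARED_SECRET = b"provisioned-out-of-band"

        def make_challenge() -> bytes:
            # Server: issue a fresh random challenge per connection attempt,
            # so a captured response cannot be replayed later.
            return os.urandom(16)

        def respond(challenge: bytes, client_version: str) -> bytes:
            # Client: prove knowledge of the secret without revealing it.
            msg = challenge + client_version.encode()
            return hmac.new(SHARED_SECRET, msg, hashlib.sha256).digest()

        def verify(challenge: bytes, client_version: str, response: bytes) -> bool:
            # Server: recompute and compare in constant time.
            expected = hmac.new(SHARED_SECRET, challenge + client_version.encode(),
                                hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        challenge = make_challenge()
        assert verify(challenge, "2.4.5.2", respond(challenge, "2.4.5.2"))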

    Read the article

  • Database Consolidation Slides

    - by B R Clouse
    In case you missed us in the Demogrounds at Oracle OpenWorld, or if you were there and would like to take another look, here are the slides we were presenting last week: Database Consolidation for Private Database Clouds. I'm thinking of adding a voice-over, once my voice recovers from four days of non-stop discussions, meetings, speaking sessions, etc. A few of the questions we answered frequently included:
    Q: Is it possible to deploy an Oracle Database Cloud today with Oracle's current technologies and products?
    A: Absolutely!  Oracle has been developing technologies for several years that support the key features of a cloud environment.  Oracle Database 11g is an ideal platform for database clouds.
    Q: Are Oracle Engineered Systems required for Oracle Database Clouds?
    A: Oracle Database Clouds run best on our Engineered Systems, but they can also be deployed on any platform that supports the database, as many customers are doing today.
    If you have questions, feel free to post them here and we'll start a dialog.

    Read the article

  • Blog Rebranding

    I have been spending more and more time learning as much as I can about Agile development, and I have also been fairly immersed in rolling out TFS 2010 in our environment.  I feel it is time to talk about some of my experiences, so I am rebranding my blog to focus on these topics.  I am going to start with a series of posts on the process I have gone through getting TFS 2010 configured for our development teams. Last week, Brian Harry was in our office and gave a great talk on the improved tools in TFS 2010 and how Microsoft uses the tools internally.  I followed that up with a high-level overview of the improved out-of-the-box process templates and the process to customize them.  I am definitely very excited about the new features in 2010 and hopefully will keep up my motivation to blog about them.  I am writing my first post right now about the process I went through to build a task progress report based on the user story progress report in the MSF for Agile Development template.  Stay tuned.

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite, which was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system. Our DB schema has over 150 tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing the Repository and Unit of Work patterns, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand-written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely a YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial in our case. Any thoughts on whether I am missing anything here?
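    To make the unit-vs-integration distinction concrete, here is a minimal sketch of an integration-style test that exercises a concrete repository against a real database, with no mocks and no generic repository layer. It is written in Python with SQLite purely for illustration; the same shape applies to a test hitting SQL Server through Entity Framework:

        import sqlite3
        import unittest

        class CustomerRepository:
            """A concrete repository: no interface, no mock, just the real store."""
            def __init__(self, conn):
                self.conn = conn

            def add(self, name):
                self.conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))

            def find_all(self):
                rows = self.conn.execute("SELECT name FROM customers")
                return [row[0] for row in rows]

        class CustomerRepositoryIntegrationTest(unittest.TestCase):
            def setUp(self):
                # A throwaway in-memory database per test keeps runs independent.
                self.conn = sqlite3.connect(":memory:")
                self.conn.execute("CREATE TABLE customers (name TEXT)")

            def test_roundtrip(self):
                repo = CustomerRepository(self.conn)
                repo.add("Acme")
                self.assertEqual(repo.find_all(), ["Acme"])

        if __name__ == "__main__":
            unittest.main()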

    Read the article

  • MySQL documentation writer wanted

    - by stefanhinz
    As MySQL is thriving and growing, we're looking for an experienced technical writer located in Europe or North America to join the MySQL documentation team.
    For this job, we need the best and most dedicated people around. You will be part of a geographically distributed documentation team responsible for the technical documentation of all MySQL products. Team members are expected to work independently, requiring discipline and excellent time-management skills as well as the technical facilities to communicate across the Internet.
    Candidates should be prepared to work intensively with our engineers and support personnel. The overall team is highly distributed across different geographies and time zones. Our source format is DocBook XML. We're not just writing documentation, but also handling publication. This means you should be familiar with DocBook, and willing to learn our publication infrastructure. Candidates should therefore be interested not just in writing but also in the technical aspects of publishing documentation.
    Regarding your initial areas of authoring, those would be MySQL Cluster, MySQL Enterprise Monitor and Backup, and various parts of the MySQL server documentation (also known as the MySQL Reference Manual). This means you should be familiar with MySQL in general, and preferably also with MySQL Cluster and the MySQL Enterprise offerings.
    Other qualifications:
    - Native English speaker
    - 3 or more years previous experience in writing software documentation
    - Excellent written and oral communication skills
    - Ability to provide (online) samples of your work, e.g. books or articles
    - Curiosity to learn new technologies
    - Familiarity with distributed working environments and versioning systems such as SVN
    - Comfortable with working on multiple operating systems, particularly Windows, Mac OS X, and Linux
    - Ability to administer own workstations and test environment
    If you're interested, contact me under [email protected].

    Read the article

  • How often do you review fundamentals?

    - by mlnyc
    So I've been out of school for a year and a half now. In school, of course, we covered all the fundamentals: OS, databases, programming languages (i.e. syntax, binding rules, exception handling, recursion, etc.), and fundamental algorithms; the rest were more in-depth topics on things like NLP, data mining, etc. Now, a year ago, if you had told me to write a quicksort, reverse a singly-linked list, or analyze the time complexity of some 'naive' algorithm vs. its dynamic programming counterpart, I would have been able to give you a decent and hopefully satisfying answer. But if you had asked me more real-world questions, I might have been stumped (things like how you would handle logging for an application, the security differences between GET and POST, differences between SQL Server and Oracle SQL, anything I list on my resume as currently working with [jQuery questions, ColdFusion questions, ...], etc.). Now, I feel things are the opposite. I haven't written my own sort since graduating, and I don't really have to worry much about theoretical things that do not naturally fall into the problems I am trying to solve. For example, I might give you a great SQL solution using an analytical function that I would have otherwise been stumped on, or write a cool web application using Angular or something, but ask me to write an algo for insertAfter(Element* elem) and I might not be able to do it in a reasonable time frame. I guess my question here to the experienced programmers is: how do you balance the need to learn and experiment with new technologies (fun!), work on personal projects (also fun!), work and solve real-world problems in a time-boxed environment (where I might reach out to a library that does what I want rather than re-invent the wheel, so that I can focus on the problem I am trying to solve), and refresh old theoretical material which is still valid for interviews and such (which can be a drag)? Do you review older material (such as famous algorithms, dynamic programming, Big-O analysis, locking implementations) regularly, or just when you need it? How much time do you dedicate to each in your 'deliberate practice', and do you have a to-do list of topics that you want to work on?

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects. Nothing big, really; a cute, little manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architecture the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple: after all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing; a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
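    One common way out, sketched below in Python under the assumption of a conventional per-frame update loop, is to keep the top-level object as a thin composition root that merely owns and ticks focused subsystems. New features then land in a dedicated system instead of accreting onto a god object:

        class StateMachine:
            def update(self, dt):
                pass  # menu / playing / paused transitions

        class PhysicsSystem:
            def update(self, dt):
                pass  # integrate bodies, resolve collisions

        class AudioSystem:
            def update(self, dt):
                pass  # mix and play queued sounds

        class Game:
            """Thin composition root: owns subsystems, holds no game logic itself."""
            def __init__(self):
                self.systems = [StateMachine(), PhysicsSystem(), AudioSystem()]

            def update(self, dt):
                for system in self.systems:
                    system.update(dt)

        # A new feature (say, achievements) becomes one more focused system
        # appended to the list, not another method on a growing manager.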

    Read the article

  • VMware Player 5.0 or VMware Workstation 9.0 after upgrade to Ubuntu 12.10

    The upgrade process
    Upgrading Ubuntu 12.04 to the latest version, 12.10, aka Quantal Quetzal, is straightforward, and you only need to follow the official upgrade instructions. The short version on the console looks like this:
    sudo do-release-upgrade
    This will update the repository entries and start the upgrade process. After some minutes or hours of download and installation, you have to reboot your system once to get the new kernel loaded. As of the time of writing, I'm on '3.5.0-17-generic'. And as with any modification of the kernel version, you have to compile the necessary kernel modules to get VMware Player or Workstation up and running. Usually, this happens the first time you start your VMware software, and that's it. Well, again, not so this time.
    Getting the kernel patch
    Luckily, the community over at VMware is very active, and you can get a new kernel patch in the online forums here. Get the download and put it in a folder where you have write permissions. Then you extract the archive on the console like so:
    tar -xjvf vmware9_kernel35_patch.tar.bz2
    Then you change into the newly created folder:
    cd vmware9_kernel3.5_patch/
    And you execute the available shell script as root (superuser) like so:
    sudo ./patch-modules_3.5.0.sh
    This will stop any running instances of VMware software, patch the source files, and run the compile process for your active environment. This might take some time depending on your machine, and once completed you can start VMware Player or Workstation as before. In case you apply the patch again, the script will simply quit with the following output:
    /usr/lib/vmware/modules/source/.patched found. You have already patched your sources. Exiting
    You might remove the .patched file in case you upgraded/changed your kernel and need to apply the patch again.
    Disclaimer: The patch is "as-is"; the patcher was originally created by Artem S. Tashkinov and later modified by An_tony. Please refer to the VMware forum in case of questions or problems. There are also patches available for older versions of VMware Player or Workstation.

    Read the article

  • How does Game Salad compare with Cocos2D in terms of 2D game development?

    - by jih
    I want to make some 2d games for iOS. I first came across cocos2d and Kobold, but then wanted something more graphical for rapid prototyping. I then found Game Maker, which doesn't support iOS but is fairly easy to learn, and then found Game Salad, which supports iOS as well as other platforms. I know this question has been asked before, but I want to know, in terms of the types of games I want to develop, what learning investment path would be best. The game genres I am interested in are:
    - Side scrollers
    - Simple games like Diamond Dash or Ninja Fruits, Shanghai, etc.
    - Old-fashioned Zelda or Dragon Quest types (Nintendo fan here :-)
    - 2d adventure RPG games (real-time or turn-based)
    - Mystery turn-based games like Carmen Sandiego, Wizardry, Myst, etc.
    So now the question becomes: which game development environment should I invest my time in learning, Game Salad or cocos2d? To make that decision, I was wondering if experienced game programmers could help with some pointers: It would seem Game Salad would be great for quickies, being graphical, but for 2d platform games and the like, would there be speed/performance/feature penalties? Are there certain 2d game genres of those above that Game Salad is better at, while cocos2d would be better for others?

    Read the article

  • Is it possible to keep only one Database for both web and desktop applications?

    - by B4NZ41
    I'm experiencing trouble with my business model; let me explain better. I have been developing a piece of software for a year and a few months. It's for the food industry; more exactly, software for: delivery, take-away, table reservation, POS, accounts payable and receivable, printing (receipts), kitchen order monitors, customer order control, and the fiscal area. Well, I have separated the software into two main areas: one is the web area, and the other is the desktop area (used by admins only), which is installed locally.
    1 - Web Area (basically does the following:)
    - Shows the catalog with the products
    - Customers make orders
    - Customers pay for the orders
    - etc., as mentioned above
    2 - Desktop Area
    - Manage orders
    - Manage customers
    - Manage suppliers
    - Manage accounts payable and receivable
    - etc., as mentioned above
    The web area is hosted on an online web server (scripts and database are online). The desktop area is hosted locally on a Linux machine with a local database and local script files. My question is: is it possible to keep only one database for both applications? If YES, what is the best approach? My technical environment is as follows:
    Database: I actually have two databases working, and I would love to keep only one.
    Operating system: Linux (kernel 2.6.x and above) or Windows (XP and above).
    For the web area: Apache, PHP, Python, JavaScript, shell script, and MySQL.
    For the desktop area: PHP-GTK2, Apache, PHP, MySQL, and shell script.
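    For what it's worth, one common approach is exactly that: keep a single central MySQL server and have both the web scripts and the desktop application connect to it over the network. Here is a minimal sketch of the idea in Python (already part of the web stack above); the hostname, credentials, and table are placeholders, and it assumes the third-party PyMySQL driver is installed:

        import pymysql  # assumption: pip install pymysql

        # One shared configuration, used by BOTH the web and desktop tiers,
        # pointing at the single central MySQL server.
        DB_CONFIG = {
            "host": "db.example.com",   # placeholder central server
            "user": "pos_app",
            "password": "change-me",
            "database": "food_service",
        }

        def fetch_open_orders():
            conn = pymysql.connect(**DB_CONFIG)
            try:
                with conn.cursor() as cur:
                    cur.execute("SELECT id, status FROM orders WHERE status = %s",
                                ("open",))
                    return cur.fetchall()
            finally:
                conn.close()

    The trade-off to weigh is that the desktop POS then depends on the network link to the central server; if offline operation matters, the usual alternative is a local replica that synchronizes with the central database.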

    Read the article

  • Is it a good idea to dynamically position and size controls on a form or statically set them?

    - by CrystalBlue
    I've worked mostly with interface-building tools such as Xcode's Interface Builder and Visual Studio's designer to place forms and position them on screens. But I'm finding that with my latest project, placing controls on the form through a graphical interface is not going to work. This has more to do with the number of custom controls I have to create that I can't see visually beforehand. When I first tackled this, I began to position all of my controls relative to the last ones that I created. Doing this had its own pros and cons. On the one hand, it gave me the opportunity to set one number (a margin, for example), and when I changed the margin, the controls all resized correctly relative to one another (such as shortening controls in the center while keeping controls next to the margin the same). But this started to become a spider's web of code that I knew wouldn't go very far before getting dangerous. Change one number and everything resizes, but remove one control and you've created many more errors and size problems for all the other controls. It became more like surgery than small changes to controls and layout. Is there a good way, or maybe a preferred way, to determine when I should be using relative or absolute positioning in forms?
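    For illustration, here is a minimal sketch of the margin-driven relative layout described above, in Python with plain rectangles rather than any particular UI toolkit. One constant drives every control's position and size, which is exactly what makes this style convenient to tune and fragile to reorder:

        MARGIN = 8
        FORM_WIDTH = 400
        ROW_HEIGHT = 24

        def layout(controls):
            """Stack controls vertically; every rect is derived from MARGIN alone."""
            rects = {}
            y = MARGIN
            for name in controls:
                rects[name] = (MARGIN, y, FORM_WIDTH - 2 * MARGIN, ROW_HEIGHT)
                y += ROW_HEIGHT + MARGIN  # each control sits relative to the last
            return rects

        # Changing MARGIN re-flows everything consistently...
        print(layout(["name", "email", "save"]))
        # ...but inserting or removing one control shifts every control after it,
        # which is the spider's-web fragility described above.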

    Read the article

  • Securing a Cloud-Based Data Center

    - by Orgad Kimchi
    No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications. In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment. As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users. Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here.
    When we build a cloud, the following aspects related to the security of the data and applications in the cloud become a concern:
    - Sensitive data must be protected from unauthorized access while residing on storage devices, during transmission between servers and clients, and when it is used by applications.
    - When a project is completed, all copies of sensitive data must be securely deleted and the original data must be kept permanently secure.
    - Communications between users and the cloud must be protected to prevent exposure of sensitive information from “man-in-the-middle” attacks.
    - Limiting the operating system’s exposure protects against malicious attacks and penetration by unauthorized users or automated “bots” and “rootkits” designed to gain privileged access.
    - Strong authentication and authorization procedures further protect the operating system from tampering.
    - Denial-of-service attacks, whether started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.
    In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system. Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured. This article explains how.

    Read the article

  • Decoupling software components via name convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management component. In my multitier architecture I have:
    - Base class: DAL.Device (my entity)
    - Interfaces: BL.IDriver (handles the data processing between application and device), BL.IDriverCreator (creates an IDriver from a Device), BL.IDriverFactory (handles the driver creation requests)
    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed, via a type check within the business layer / DriverFactory. That means every new driver needs a) changing code within the DriverFactory and b) referencing the new IDriver implementation/assembly. From a customer's point of view, that means every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy):
    - BL.RestDriver
    - BL.RestDriverCreator
    - DAL.RestDevice
    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
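    As a rough sketch of the name-convention idea, here it is in Python rather than .NET, with a module lookup standing in for scanning driver DLLs via reflection. The factory derives the creator's name from the device's type name at runtime, so adding a driver means dropping in a new class instead of editing the factory:

        import re
        import sys

        class RestDevice: ...

        class RestDriverCreator:
            def create(self, device):
                return f"driver for {type(device).__name__}"

        def find_creator(device, module):
            """Map 'XxxDevice' -> 'XxxDriverCreator' by naming convention."""
            match = re.fullmatch(r"(\w+)Device", type(device).__name__)
            if not match:
                raise TypeError(f"{type(device).__name__} ignores the convention")
            creator_cls = getattr(module, match.group(1) + "DriverCreator", None)
            if creator_cls is None:
                raise LookupError(f"no creator found for {type(device).__name__}")
            return creator_cls()

        # The factory never names RestDriverCreator directly -- it is discovered.
        creator = find_creator(RestDevice(), sys.modules[__name__])
        print(creator.create(RestDevice()))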

    Read the article

  • How similar should the environments of PreProd and Prod be?

    - by RoboShop
    I've just recently been on a project, and during the release we realized that it didn't work in Production. It works in all other environments, but because we have a separate release team, and we cannot set up the servers and environments ourselves, we have no visibility of the configuration on them. We suspect that Prod has some user permissions in its account or IIS settings that are different, so we are working through it now. So I think this whole thing has been a learning experience for me, and I don't want the same thing repeated again. I would like to ask: how different should these environments be? I always thought that PreProd should be an identical copy of the Prod environment, using a copy of the same database, using a copy of the same user account, installed on the same kind of servers, etc. But how far should I take it? If the web site is externally facing, should PreProd be externally facing? What if the website has components that don't require a user account or password to navigate to? Is it still okay to expose it to the outside world?

    Read the article

  • One of my VMs went boom using VirtualBox, and how it got fixed

    - by Enrique Lima
    I am running an HP Envy 15 with 16GB of RAM and a 500GB (7200 RPM) hard drive. I had a VM configured in another environment and created the virtual machine config file in VirtualBox; everything seemed OK. Fired it up, and it was  s   l   o   w. It took close to 10 minutes for it to load, and about 5 more to see Windows was in the process of loading, before the BSOD. Thought, maybe, just maybe, it will not happen again … oh, was I wrong. Frustration had already hit an all-time high with this configuration and the number of issues I’ve had.
    How I did the troubleshooting
    The best thing to do (IMO) is to step back and gather your tools to debug this situation. Tools: the VirtualBox command-line tools and Windows Debug. VirtualBox comes with a pretty good set of tools for examining and migrating VMs, and for overall VM management tasks. The first step: use VBoxManage to prevent the VM from rebooting after the error, to get enough time to really dig into the BSOD issue. The command used:
    VBoxManage setextradata VMNAME "VBoxInternal/PDM/HaltOnReset" 1
    Once this was done, the error reported was an “Inaccessible boot device” coming from a “Stop – 7B” type of error on the BSOD. The issue I had with this: my VM was configured to use a virtual SATA controller, and I thought Windows 2008 R2 would handle this fine … wrong again! The integration tools from the other product were trying to take effect, and that was throwing everything off.
    The fix
    The fix was almost handed to me: I edited the configuration for the VM, removed the SATA controller from it, added the virtual hard drive under an IDE controller, booted up and voilà … it works! I was then able to install the VirtualBox guest tools and such, but have decided to favor “keep on working” over “let’s try SATA again”.

    Read the article

  • Has anyone used Game Salad before, and how does it compare with cocos2d in terms of 2d game development?

    - by jih
    First, a short intro: I am new to the game development space and want to make some 2d games for iOS. I first came across cocos2d and Kobold, but then wanted something more graphical for rapid prototyping. I then found Game Maker, which doesn't support iOS but is fairly easy to learn, and then found Game Salad, which supports iOS as well as other platforms. I know this question has been asked before, but I want to know, in terms of the types of games I want to develop, what learning investment path would be best. The game genres I am interested in are:
    - Side scrollers
    - Simple games like Diamond Dash or Ninja Fruits, Shanghai, etc.
    - Old-fashioned Zelda or Dragon Quest types (Nintendo fan here :-)
    - 2d adventure RPG games (real-time or turn-based)
    - Mystery turn-based games like Carmen Sandiego, Wizardry, Myst, etc.
    So now the question becomes: which game development environment should I invest my time in learning, Game Salad or cocos2d? It would seem Game Salad would be great for quickies, being graphical, but for 2d platform games and the like, would there be speed/performance/feature penalties? Are there certain 2d game genres of those above that Game Salad is better at, while cocos2d would be better for others? Anyone with experience of both who can share some pointers? Thanks. inexperienced jih

    Read the article
