Search Results

Search found 11219 results on 449 pages for 'developer skills'.


  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I've been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration. One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to "get latest" against the source repository and immediately build with no errors. This can break down both in an automated build scenario and a "new guy" scenario when the solution has external dependencies that may not be present in the build environment.

    The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called "Reference Items". I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well. That gets me closer to the ideal where a new developer installs the development IDE, gets latest, and can immediately build and run unit tests before jumping into coding the feature of the day.

    This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention. The issue that I ran into is that with Silverlight (at least version 4), the behavior of the "Add Reference" command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .NET projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine it cannot be found at compile time and the build will fail.

    To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file). Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath. Here's a before and after example of the component that ultimately led me to the investigation behind this post:

    Before:

        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" />

    After:

        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">
          <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>
        </Reference>

    Read the article

  • Investment scheme for a PC game project

    - by Alex Kamen
    Good day everyone, I am working on a PC game project that has 3 phases planned: micro, macro and MMO versions (if confused, see a brief description at the bottom). I have found a potential investor for the micro version of the game, but naturally, he requested a detailed plan of how the game will pay back. And the problem is that the micro version itself is not supposed to be monetized much, other than some ads and limited in-game currency utilization. The idea is that with this combat demo already at hand, it should be possible to get a large enough investment (millions of dollars), use it to pay back the initial small one (thousands of dollars) and take the project into the macro phase, which will really make a profit. This way, everybody is going to win, provided that I can deliver the end product.

    Yet while I am confident that both the conception of the macro version and the real gameplay of the micro version are going to be appealing, I don't know how to obtain any guarantee that I will be able to get funded once I have the prototype ready. And without that, I won't receive the funds for the prototype in the first place!

    To summarize, my question is: how do I figure out my future possibilities of getting funded once I have the combat demo out, basically "whom to write to and what"? Ideally, I would like some sort of a preliminary agreement with a game publisher, something that would basically state: "If the developer provides the product in time and in quality corresponding to the specifications given, the publisher guarantees to allocate funds for distribution and further development, thereby acquiring the right to X part of all future profits." Does this sound sane? It's just that I don't want to sell all of my rights out straight away by taking a big outside investment while the project is at such an early stage.

    I would appreciate it if you would share your thoughts on this kind of scheme, and be sure to ask questions, as I am sure I must have forgotten to mention a ton of important things, like the fact that the initial funds are going to be spent on outsourcing (living in Siberia is really just great).

    Here's a brief outline of what each version will feature:

    [micro]
    1) turn-based tactical combat rules
    2) character development
    3) arena/tournament system

    [macro]
    4) AI-ruled dynamic interactive worlds
    5) global map adventuring
    6) strategic RPG + god simulator gameplay

    [mmo]
    7) persistent worlds system
    8) social structures system ("guilds/clans")
    9) god simulation on the MMO scale

    P.S. Obviously, these features are incremental, so the MMO version has all 9.

    Read the article

  • How do I make the launcher progress bar work with my application?

    - by Kevin Gurney
    Background Research

    I am attempting to update the progress bar within the Unity launcher for a simple Python/Gtk application created using Quickly called test; however, following the instructions in this video, I have not been able to successfully update the progress bar in the Unity launcher. In the Unity Integration video, Quickly was not used, so the way that the application was structured was slightly different, and the code used in the video does not seem to function properly without modification in a default Quickly ubuntu-application template application.

    Screenshots

    Here is a screenshot of the application icon as it is currently displayed in the Unity Launcher. Here is a screenshot of the kind of Unity launcher progress bar functionality that I would like (overlayed on the mail icon: wiki.ubuntu.com).

    Code

        class TestWindow(Window):
            __gtype_name__ = "TestWindow"

            def finish_initializing(self, builder): # pylint: disable=E1002
                """Set up the main window"""
                super(TestWindow, self).finish_initializing(builder)

                self.AboutDialog = AboutTestDialog
                self.PreferencesDialog = PreferencesTestDialog

                # Code for other initialization actions should be added here.
                self.add_launcher_integration()

            def add_launcher_integration(self):
                self.launcher = Unity.LauncherEntry.get_for_desktop_id("test.destkop")
                self.launcher.set_property("progress", 0.75)
                self.launcher.set_property("progress_visible", True)

    Expected Behavior

    I would expect the above code to show a progress bar that is 75% full overlayed on the icon for the test application in the Unity Launcher, but the application only runs and displays no progress bar when the command quickly run is executed.

    Problem Investigation

    I believe that the problem is that I am not properly getting a reference to the application's main window; however, I am not sure how to properly fix this problem. I also believe that the line:

        self.launcher = Unity.LauncherEntry.get_for_desktop_id("test.destkop")

    may be another source of complication because Quickly creates .desktop.in files rather than ordinary .desktop files, so I am not sure if that might be causing issues as well. Perhaps another source of the issue is that I do not entirely understand the difference between .desktop and .desktop.in files. Does it possibly make sense to make a copy of the test.desktop.in file and rename it test.desktop, and place it in /usr/share/applications in order for get_for_desktop_id("test.desktop") to reference the correct .desktop file?

    Related Research Links

    Although I am still not clear on the difference between .desktop and .desktop.in files, I have done some research on .desktop files and I have come across a couple of links:

    Desktop Entry Files (library.gnome.org)
    Desktop File Installation Directory (askubuntu.com)
    Unity Launcher API (wiki.ubuntu.com)
    Desktop Files: putting your application in the desktop menus (developer.gnome.org)
    Desktop Menu Specification (standards.freedesktop.org)

    Read the article

  • Remote Task Flow vs. WSRP Portlets

    - by Frank Nimphius
    A remote task flow is a bounded task flow that is deployed as a stand-alone Java EE application on a remote server with its URL Invoke property set to url-invoke-allowed. The remote task flow is accessed either from a direct browser GET request or, when called from another ADF application, through the task flow call activity. For more information about how to invoke remote task flows from a task flow call activity see chapter 15.6.4 How to Call a Bounded Task Flow Using a URL of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework at http://docs.oracle.com/cd/E23943_01/web.1111/b31974/taskflows_activities.htm#CHDJDJEF

    Compared to WSRP portlets, remote task flows in Oracle JDeveloper 11g R1 and R2 have a functional limitation in that they cannot be embedded as a region on a page but require the calling ADF application to navigate off to another application and page. The difference between a remote task flow call using the task flow call activity and a simple redirect to a remote Java EE application is that the remote task flow has a state token attached that allows it to restore the state of the calling application upon task flow return. A use case for a remote task flow call activity is a "yellow page lookup" scenario in which different ADF applications use a remote task flow to look up people, products or similar and return a selected value to the calling application.

    Note that remote task flow calls need to be performed from a bounded or unbounded top-level task flow of the calling application. If called from a region (using the parent call activity) in a page, the region state is not recovered upon task flow return.

    ADF developers recently have identified remote task flows as an architecture pattern to partition their ADF applications into independently deployed Java EE applications. While this sounds like a desirable use of the remote task flow feature, it is not possible to achieve as long as remote task flows don't render as an ADF region.

    Read the article

  • Suggestions for connecting a .NET WPF GUI with a Java SE server

    - by Sam Goldberg
    BACKGROUND

    We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We are planning to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user).

    The client has about 6 different operations it needs to perform with the server, for example:

    - CRUD with configuration parameters
    - query a subset of the data
    - receive updates of current positions from the server

    It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering using:

    - SOAP Web Service
    - RESTful service
    - building a custom TCP/IP based API (text or XML) (least preferred, but we use this approach with other applications we have)

    As best as I understand, the pros and cons of the different web service flavors are:

    SOAP
    pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running refresh on the Web Service reference to regenerate the classes.
    con: more overhead in the communication layer, sending more text, etc. We're not using a J2EE container, so maybe it doesn't work so well with J2SE.

    REST
    pro: lighter weight, less data. Has good .NET and Java support. (I don't have any real experience with this, so don't know what other benefits it has.)
    con: the client will not be automatically aware if there are any new operations or properties added (?), so the communication layer needs to be updated by the developer if the server interface changes.
    con: (both approaches) the server cannot really push updates to the client at regular intervals (?) (However, we won't mind if the client polls the server to get updates.)

    QUESTION

    What are your opinions on the above options or suggestions for other ways to connect the 2 parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
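
    A minimal sketch of the polling variant discussed above, assuming the Java SE server exposes a plain HTTP endpoint using only the JDK's built-in com.sun.net.httpserver package (no Java EE container). The /positions path and the empty JSON payload are illustrative assumptions, not part of the original design.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpHandler;
        import com.sun.net.httpserver.HttpServer;

        import java.io.OutputStream;
        import java.net.InetSocketAddress;

        public class PositionsEndpoint {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/positions", new HttpHandler() {
                    public void handle(HttpExchange exchange) throws java.io.IOException {
                        // A real handler would serialize the current positions snapshot here.
                        byte[] body = "{\"positions\":[]}".getBytes("UTF-8");
                        exchange.getResponseHeaders().add("Content-Type", "application/json");
                        exchange.sendResponseHeaders(200, body.length);
                        OutputStream os = exchange.getResponseBody();
                        os.write(body);
                        os.close();
                    }
                });
                server.start(); // the WPF client can now poll GET /positions every few seconds
            }
        }

    The WPF side would simply deserialize the JSON on its normal refresh timer; if the interface grows, the same endpoint style keeps the communication layer thin.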

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 3-5, 2012 in Milan, Italy: Register here.

    Endeca Information Discovery plays a key role in your big data analysis and complements Oracle Business Intelligence solutions such as OBIEE. This FREE hands-on workshop for Oracle Partners highlights technical know-how of the product and helps you understand its value proposition. We will walk you through four key components of the product:

    - Oracle Endeca Server: a highly scalable, search-analytical database that derives the data model based on the data presented to it, thereby reducing data modeling requirements.
    - Studio: a highly interactive, component-based user interface for configuring advanced, yet intuitive, analytical applications.
    - Integration Suite: provides rapid unification and enrichment of diverse sources of information into a single integrated view.
    - Extensible Value-Added Modules: add-on modules that provide value quickly through configuration instead of custom coding.

    Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio.

    Lab Outline
    The labs showcase Oracle Endeca Information Discovery components and functionality by providing expertise on features and know-how of building such applications. The hands-on activities are based on a Quick Start application provided during the class.

    Audience
    Oracle Partners, Big Data Analytics Developers and Architects, BI and EPM Application Developers and Implementers, Data Warehouse Developers.

    Equipment Requirements
    This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements:

    Hardware
    - 8GB RAM is highly recommended (a 64-bit Windows machine is required)
    - 40 GB free space (includes staging)
    - USB 2.0 port (at least one available)

    Software
    - One of the following operating systems: 64-bit Windows host/laptop OS (Windows 7 or Windows Server 2008), or a 64-bit host/laptop OS with a Windows VM (Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 8.x, Firefox 3.6 or Firefox 6.0
    - WinRAR or 7zip utility to unzip workshop files, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/

    Oracle Endeca Information Discovery Workshop. Register here: October 3-5, 2012: Cinisello Balsamo, Milan. We will confirm your place with you within 2 weeks.

    Questions? Send email to: [email protected] : Oracle Platform Technologies Enablement Services.

    Read the article

  • The Case for Complimentary Software Copies

    - by GGBlogger
    As the Geriatric Geek you can understand that I've been writing and studying for over 60 years. That means that I've seen insane changes in the computer software industry. I've made the joke that I get a new college education every 6 months or so. Of course that's an exaggeration, but it doesn't make the feeling go away. I have a long-standing and strong relationship with Microsoft, so I'm armed with virtually every tool they make. It also means that I have access to tons of training material. But here's the rub...

    Last year I started a definitive read of Professional Visual Basic 2008. The purpose was to fill in holes in my understanding of various things. I'm currently on page 1119 of some 1400 pages. During this sojourn I've decided that the future is web related, which is to say that the future of "thick client" applications running as Windows applications is likely to slowly disappear. To that end I've taken a side trip or two into the world of ASP (including XML), Silverlight and cloud development. After carefully avoiding (that's tongue in cheek) XML for years I finally had to bite the bullet, so to speak, and start learning XML in earnest. The most recent result of that was trial downloads of Altova's MissionKit 2010 for Software Architects and Liquid Technologies' Liquid XML Studio Developer Edition. These are both beautiful products and I want to learn them and write about them.

    Now comes the rub... While 30-day evaluations are generous in allowing casual users to assess these technologies for purchase, they are NOT long enough to allow an author to evaluate, learn and ultimately write about them. Even if I devoted the full 30 days to learning, using and writing about, say, Altova's suite, I wouldn't have enough time. Liquid XML may be a little easier to learn (one product as opposed to 8). Add to that the fact that I frequently get sidetracked to add to my kit and it really blows out. It can be extremely frustrating when I've devoted hours to a project and suddenly discover that to complete it I will either need to purchase a license or abandon the project. Since my life blood does not depend on the product, I end up abandoning the project and moving on.

    So to the folks from whom I request complimentary copies... I guarantee that if I convert your product to doing paid development work I will purchase a license to do that, but as long as I am using your product to study for the purpose of writing samples, teaching use or otherwise promoting your product to other paying customers, I will ask that you give me a license so that I can do that without facing the dread expiration of a 30-day trial.

    Read the article

  • Solution for developers wanting to run a standalone WLS 10.3.6 server against JDev 11.1.1.6.0

    - by Chris Muir
    In my previous post I discussed how to install the 11.1.1.6.0 ADF Runtimes into a standalone WLS 10.3.6 server by using the ADF Runtime installer, not the JDeveloper installer. Yet there's still a problem for developers here because JDeveloper 11.1.1.6.0 comes coupled with a WLS 10.3.5 server. What if you want to develop, deploy and test with a 10.3.6 server? Have we lost the ability to integrate the IDE and the WLS server, where we can run and stop the server, automatically deploy our apps to the server and more?

    JDeveloper actually solved this issue some time back, but not many people will have recognized the feature for what it does as it wasn't needed until now. Via the Application Server Navigator you can create 2 types of connections, one to a remote "standalone WLS" and another to an "integrated WLS". It's this second option that is useful, because what we can do is install a local standalone WLS 10.3.6 server on our developer PC, then create a separate "integrated WLS" connection to the standalone server. Then by accessing your Application's properties through the Application menu -> Application Properties -> Run -> Bind to Integration Application Server option we can choose the newly created WLS server connection to work with our application.

    In this way JDeveloper will now treat the new server as if it was the integrated WLS. It will start when we run and deploy our applications, terminate it on request and so on. Of course don't forget you still need to install the ADF Runtimes for the server to be able to work with ADF applications.

    Note there is bug 13917844 lurking in the Application Server Navigator for at least JDev 11.1.1.6.0 and earlier. If you right click the new connection and select "Start Server Instance" it will often start one of the other existing connections instead (typically the original IntegratedWebLogicServer connection). If you want to manually start the server you can bypass this by using the Run menu -> Start Server Instance option, which works correctly.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-05

    - by Bob Rhubart
    Webcast: Oracle Maximum Availability Architecture Best Practices (event.on24.com)
    Date: Thursday, April 12, 2012. Time: 10:00 AM PDT. Oracle expert Tom Kyte discusses how Oracle's Maximum Availability Architecture can help to minimize the costs and risk of downtime.

    Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat (www.oracle.com)
    Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Speakers: Steve Wilson (VP Systems Management, Oracle), John Fowler (Exec VP Systems, Oracle), Brad Cameron (VP Development, Oracle Fusion Middleware), Bill Nesheim (VP Oracle Solaris), Dennis Reno (VP Customer Portal Experience, Oracle), Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center), Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center).

    2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org)
    Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3.

    Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com)
    Wednesday, May 02, 2012, 8:00 AM - 4:30 PM. Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017.

    Webcast Series: Data Warehousing Best Practices (event.on24.com)
    April 19, 2012 - Best Practices for Workload Management of a Data Warehouse on Oracle Exadata. May 10, 2012 - Best Practices for Extreme Data Warehouse Performance on Oracle Exadata.

    How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya (blogs.oracle.com)
    An illustrated how-to from Oracle Fusion Middleware A-Team blogger Nicolas Montoya.

    Get Proactive with Fusion Middleware | Daniel Mortimer (blogs.oracle.com)
    Daniel Mortimer shows how to access "a one stop shop for navigating to proactive support material, tools, and communication channels related to Oracle Fusion Middleware."

    Build an enterprise on 'other peoples' work', via SOA and cloud | Joe McKendrick (www.zdnet.com)
    Are you down with OPW? Joe McKendrick's synopsis of a recent presentation by David Linthicum focuses on reuse.

    Oracle Fusion Middleware Security: Unsolicited login with OAM 11g | Chris Johnson (fusionsecurity.blogspot.com)
    Chris Johnson shows how to create a shopping cart login model using "plain old HTML."

    How to use the Human WorkFlow Web Services | Edwin Biemond (biemond.blogspot.com)
    Oracle ACE Edwin Biemond shows how to invoke two WorkFlow web services to query the Human task in Oracle SOA Suite with your own ordering and restrictions.

    Bad Practice Use Case for LOV Performance Implementation in ADF BC | Andrejus Baranovskis (andrejusb.blogspot.com)
    "If you want to learn something well, there is nothing better [than] to learn bad practices first," says Oracle ACE Director Andrejus Baranovskis.

    Thought for the Day
    "The best meetings get real work done. When your people learn that your meetings actually accomplish something, they will stop making excuses to be elsewhere." - Larry Constantine

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find out that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help with more complex code to control the flow of logic. For example, an if-then-else with many nested levels can be near impossible to debug without tracing through it when using CoffeeScript. Also, with IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading it is not that hard, especially with some extensions that show vertical lines in the code editor to help see what is nested within what part of the code.

    However with CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you need to first of all remove ALL non-CoffeeScript coding constructs for it to even compile. However, js2coffee can help with that. Still, keeping track of nested levels became something that was simply not manageable using CoffeeScript.

    Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent code, spaces or tabs, is not reliable, as there is no way the programmer has an easy way of knowing what parts of the code will get hit when the code spans a page.

    When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be redesigned! Needless to say that is absurd. When I included a piece of the code he asked me if it was legacy code. It's like saying to a Java programmer, sorry, you cannot use Java because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them as they use them to denote objects. Makes sense, but I would still love to see some way to replace code flow control by spaces and indentation with something more concrete and human readable.

    Read the article

  • Enterprise Service Bus (ESB): Important architectural piece to a SOA or is it just vendor hype?

    Is an Enterprise Service Bus (ESB) an important architectural piece to a Service-Oriented Architecture (SOA), or is it just vendor hype in order to sell a particular product such as SOA-in-a-box? According to IBM.com, an ESB is a flexible connectivity infrastructure for integrating applications and services; it offers a flexible and manageable approach to service-oriented architecture implementation. With this being said, it is my personal belief that ESBs are an important architectural piece to any SOA. Additionally, generic design patterns have been created around the integration of web services into an ESB regardless of any vendor. ESB design patterns, according to Philip Hartman, can be classified into the following categories:

    - Interaction Patterns: enable service interaction points to send and/or receive messages from the bus
    - Mediation Patterns: enable the altering of message exchanges
    - Deployment Patterns: support solution deployment into a federated infrastructure

    Examples of Interaction Patterns:

    - One-Way Message
    - Synchronous Interaction
    - Asynchronous Interaction
    - Asynchronous Interaction with Timeout
    - Asynchronous Interaction with a Notification Timer
    - One Request, Multiple Responses
    - One Request, One of Two Possible Responses
    - One Request, a Mandatory Response, and an Optional Response
    - Partial Processing
    - Multiple Application Interactions

    Benefits of the Mediation Pattern:

    - The Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently
    - Design an intermediary to decouple many peers
    - Promote the many-to-many relationships between interacting peers to "full object status"

    Examples of Deployment Patterns:

    - Global ESB: services share a single namespace and all service providers are visible to every service requester across an entire network
    - Directly Connected ESB: a global service registry that enables independent ESB installations to be visible
    - Brokered ESB: bridges services that are reluctant to expose requesters or providers to ESBs in other domains
    - Federated ESB: service consumers and providers connect to the master or to a dependent ESB to access services throughout the network

    References:

    Mediator Design Pattern. (2011). Retrieved 2011, from SourceMaking.com: http://sourcemaking.com/design_patterns/mediator
    Hartman, P. (2006, 24 1). ESB Patterns that "Click". Retrieved 2011, from The Art and Science of Being an IT Architect: http://artsciita.blogspot.com/2006/01/esb-patterns-that-click.html
    IBM. (2011). WebSphere DataPower XC10 Appliance Version 2.0. Retrieved 2011, from IBM.com: http://publib.boulder.ibm.com/infocenter/wdpxc/v2r0/index.jsp?topic=%2Fcom.ibm.websphere.help.glossary.doc%2Ftopics%2Fglossary.html
    Oracle. (2005). 12 Interaction Patterns. Retrieved 2011, from Oracle® BPEL Process Manager Developer's Guide: http://docs.oracle.com/cd/B31017_01/integrate.1013/b28981/interact.htm#BABHHEHD
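
    To make the mediation benefits listed above concrete, here is a minimal, vendor-neutral Java sketch of the underlying Mediator idea: peers talk only to the bus, never to each other. The ServiceBus, Endpoint and OrderService names are illustrative assumptions, not any particular ESB product's API.

        import java.util.ArrayList;
        import java.util.List;

        interface Endpoint {
            String name();
            void receive(String from, String message);
        }

        class ServiceBus {                       // the mediator
            private final List<Endpoint> endpoints = new ArrayList<Endpoint>();

            void register(Endpoint e) { endpoints.add(e); }

            void send(String from, String to, String message) {
                for (Endpoint e : endpoints) {
                    if (e.name().equals(to)) {
                        e.receive(from, message); // routing/mediation happens in one place
                    }
                }
            }
        }

        class OrderService implements Endpoint {
            OrderService(ServiceBus bus) { bus.register(this); }
            public String name() { return "orders"; }
            public void receive(String from, String message) {
                System.out.println("orders got '" + message + "' from " + from);
            }
        }

        public class MediatorDemo {
            public static void main(String[] args) {
                ServiceBus bus = new ServiceBus();
                new OrderService(bus);
                bus.send("billing", "orders", "invoice-created");
            }
        }

    A real ESB adds registries, transformation and policy on top of this routing core, but the decoupling benefit comes from exactly this shape: adding or replacing a peer only touches the bus registration, not the other peers.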

    Read the article

  • Introducing Windows Azure Mobile Services

    - by Clint Edmonson
    Today I'm excited to share that the Windows Azure Mobile Services public preview is now available. This preview provides a turnkey backend cloud solution designed to accelerate connected client app development. These services streamline the development process by enabling you to leverage the cloud for common mobile application scenarios such as structured storage, user authentication and push notifications. If you're building a Windows 8 app and want a fast and easy path to creating backend cloud services, this preview provides the capabilities you need. You can take advantage of the cloud to build and deploy modern apps for Windows 8 devices in anticipation of general availability on October 26th. Subsequent preview releases will extend support to iOS, Android, and Windows Phone.

    Features

    The preview makes it fast and easy to create cloud services for Windows 8 applications within minutes. Here are the key benefits:

    - Rapid development: configure a straightforward and secure backend in less than five minutes.
    - Create modern mobile apps: common Windows Azure plus Windows 8 scenarios that the Windows Azure Mobile Services preview will support include automated service API generation providing CRUD functionality and dynamic schematization on top of structured storage; structured storage with powerful query support so a Windows 8 app can seamlessly connect to a Windows Azure SQL database; integrated authentication so developers can configure user authentication via Windows Live; and push notifications to bring your Windows 8 apps to life with up-to-date and relevant information.
    - Access structured data: connect to a Windows Azure SQL database for simple data management and dynamically created tables. Easy to set and manage permissions.

    Pricing

    One of the key things that we've consistently heard from developers about using Windows Azure with mobile applications is the need for a low-cost and simple offer. The simplest way to describe the pricing for Windows Azure Mobile Services at preview is that it is the same as Windows Azure Websites during preview.

    What's FREE?
    - Run up to 10 Mobile Services for free in a multitenant environment
    - Free with valid Windows Azure Free Trial: 1GB SQL Database
    - Unlimited ingress
    - 165MB/day egress

    What do I pay for?
    - Scaling up to dedicated VMs
    - Once the Windows Azure Free Trial expires: SQL Database and egress

    Getting Started

    To start using Mobile Services, you will need to sign up for a Windows Azure free trial, if you have not done so already. If you already have a Windows Azure account, you will need to request to enroll in this preview feature. Once you've enrolled, this getting started tutorial will walk you through building your first Windows 8 application using the preview's services. The developer center contains more resources to teach you how to:

    - Validate and authorize access to data using easy scripts that execute securely, on the server
    - Easily authenticate your users via Windows Live
    - Send toast notifications and update live tiles in just a few lines of code

    Our pricing calculator has also been updated to calculate costs for these new mobile services.

    Questions? Ask in the Windows Azure Forums. Feedback? Send it to [email protected].

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .NET Framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the new designer support for multiple diagrams per model.

    In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed on my machine SQL Server 2012 Developer Edition, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the CodePlex website.

    Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast and now, at version 5.0, can be used as your data access layer in all your .NET projects. I have posted extensively about Entity Framework in my blog. Please find all the EF related posts here.

    In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file. In EF 5.0 we can split the model into more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a new project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.NET Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an entity (e.g. Customer) from my diagram and right click on it, a new option appears: "Move to new Diagram". Make sure you have the Model Browser window open. Have a look at the picture below.
    5) When we do that a new diagram is created and our new entity is moved there. Have a look at the picture below.
    6) We can also right-click and include the related entities. Have a look at the picture below.
    7) When we do that the related entities are copied to the new diagram. Have a look at the picture below.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back to Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change colors in the various entities that make up the model. Select the entities you want to change color, then in the Properties window choose the color of your choice. Have a look at the picture below.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy in EF models that have a large number of entities in them. Hope it helps!!!!

    Read the article

  • What is the correct way to use g_signal_connect() in C++ for dynamic unity quicklists?

    - by hakermania
    I want to make my application use dynamic Unity quicklists. For building my application I am using C++ and the QtCreator IDE. When a menu action is triggered I want to be able to have access to a non-static function of my MainWindow class so as to be able to update the graphical user interface, which can be accessed from inside 'normal' MainWindow functions. So, I am building up my quicklist like this (mainwindow.cpp):

        void MainWindow::enable_unity_quicklist(){
            Unity_Menu = dbusmenu_menuitem_new();
            dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, FALSE);

            Unity_Stop = dbusmenu_menuitem_new();
            dbusmenu_menuitem_property_set(Unity_Stop, DBUSMENU_MENUITEM_PROP_LABEL, "Stop");
            dbusmenu_menuitem_child_append(Unity_Menu, Unity_Stop);
            g_signal_connect(Unity_Stop, DBUSMENU_MENUITEM_SIGNAL_ITEM_ACTIVATED, G_CALLBACK(&fake_callback), (gpointer)this);

            if(!unity_entry)
                unity_entry = unity_launcher_entry_get_for_desktop_id("myapp.desktop");
            unity_launcher_entry_set_quicklist(unity_entry, Unity_Menu);

            dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, true);
            dbusmenu_menuitem_property_set_bool(Unity_Stop, DBUSMENU_MENUITEM_PROP_VISIBLE, true);
        }

        void MainWindow::fake_callback(gpointer data){
            MainWindow* m = (MainWindow*)data;
            m->on_stopButton_clicked();
        }

        void MainWindow::on_stopButton_clicked(){
            //stopping the process...
        }

    mainwindow.h:

        private slots:
            void enable_unity_quicklist();
            void on_stopButton_clicked();

        public slots:
            static void fake_callback(gpointer data);

    This suggestion was taken from http://old.nabble.com/Using-g_signal_connect-in-class-td18461823.html

    The program crashes immediately after I choose the 'Stop' action from the Unity quicklist. Debugging the program shows that I am not able to access anything MainWindow related inside on_stopButton_clicked() without crashing. For example, it crashes when doing this check (which is the first 2 lines of code inside this function):

        if (!ui->stopButton->isEnabled())
            return;

    I have also tested lots of other things that I found on the internet, but none of them worked. One interesting solution would be to use gtkmm (http://developer.gnome.org/gtkmm-tutorial/stable/sec-connecting-signal-handlers.html.en) but I am not used to working on GTK applications (I work solely in Qt) and I don't know if this even suits my case.

    A compilable example indicating what the problem is can be found at: http://ubuntuone.com/7iKA3wnPmWVp8YNNDLlVQI (3.2Kb)

    If you are not familiar with the QtCreator IDE, you can compile with the following commands, as long as you have all the needed libraries:

        cd dynamic_unity_quicklists_test; qmake -project; qmake; make

    Read the article

  • How to overcome or fix the MIXED_DML_OPERATION error in Salesforce Apex without a future method?

    - by sathya
    MIXED_DML_OPERATION is one of the worst issues we have ever faced :) You will hit this error while trying to perform DML operations on a setup object and a non-setup object in a single action. Following are the solutions I tried, and the one that finally worked out:

    1. Perform the first object's DML in a normal Apex method, then call the second object's DML through a future method.
       Drawback: you can't get a response from the future method, because its context is different, it executes asynchronously, and it is static.

    2. Tried the following option but it didn't work:
       a. Perform the first DML operation in the normal Apex method.
       b. Tried calling the second DML from a trigger, thinking that it would be in a different context. But it didn't work.

    3. Some blogs suggested that we could try System.runAs(). Unfortunately that works only for test classes.

    4. Finally achieved it, with a synchronous response, through the following solution:

    a. Created 2 apex:commandButtons:

        <apex:commandButton value="Save and Send Activation Email" action="{!CreateContact}" rerender="junkpanel" oncomplete="callSimulateUserSave()"/>

    Note: oncomplete will not work if you don't have a rerender attribute, so just try refreshing a junk panel.

        <apex:commandButton value="SimulateUserSave" id="SimulateUserSave" action="{!SaveUser}" style="display:none;margin-left:5px;"/>

    Have a junk panel as well, just for rerendering:

        <apex:outputPanel id="junkpanel"></apex:outputPanel>

    b. Created this JavaScript function, which is called from the first button's oncomplete and clicks the second button:

        function callSimulateUserSave()
        {
            // Specify the id of the button that needs to be clicked. This id is based on the Apex Component Hierarchy.
            // You will not get this value if you don't have the id attribute on the button which needs to be clicked from JavaScript.
            // If you have any doubt in getting this value, just hover over the button using Chrome developer tools to get the id.
            // It will show something like theForm:SimulateUserSave, but you need to replace the colon with a dot here.
            // Note: I have given display:none in the style of the second button to make sure that it is not visible to the user.
            var mybtn = document.getElementById('{!$Component.theForm.SimulateUserSave}');
            mybtn.click();
        }

    c. The Apex methods CreateContact and SaveUser are the PageReference methods which contain the code to create the contact and the user respectively. After inserting the user inside the second Apex method you can just set some public properties in the page, for example the created user id, to get the user details and display an acknowledgement in the page to show users that the user has been created.

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmasters and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere or does anyone have a good solution?

    Edit 1 (2/11/2013):

    Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmasters, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013):

    To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
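
    In the absence of a built-in combined export, a minimal offline workaround is to join the two pieces of data yourself. The sketch below assumes two locally saved files, errors.csv (the 404 export) and links.csv (rows of "error URL,linking page" gathered however you can); the file names and column layout are illustrative assumptions, not anything Webmaster Tools produces.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class CrawlErrorJoin {
            public static void main(String[] args) throws IOException {
                // error URL -> all pages that link to it
                Map<String, List<String>> linkedFrom = new HashMap<String, List<String>>();
                for (String line : Files.readAllLines(Paths.get("links.csv"))) {
                    String[] cols = line.split(",", 2);
                    if (cols.length < 2) continue;
                    if (!linkedFrom.containsKey(cols[0])) {
                        linkedFrom.put(cols[0], new ArrayList<String>());
                    }
                    linkedFrom.get(cols[0]).add(cols[1]);
                }
                // emit one combined row per (error URL, linking page) pair
                System.out.println("URL,Response Code,Linked From");
                for (String line : Files.readAllLines(Paths.get("errors.csv"))) {
                    if (line.startsWith("URL,")) continue; // skip the header row
                    String url = line.split(",", 2)[0];
                    List<String> sources = linkedFrom.containsKey(url)
                            ? linkedFrom.get(url)
                            : Collections.singletonList("unknown");
                    for (String src : sources) {
                        System.out.println(url + ",404," + src);
                    }
                }
            }
        }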

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online but also offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides synced!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances.

    Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework."

    Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements; the just released NetBeans 7.3 beta is no exception. The goal of NB 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit based browsers.

    NetBeans 7.3 beta
    NetBeans 7.3 screencasts

    Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm).

    Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday October 3rd

    8:30-9:30am - What's New in Servlet 3.1: An Overview - Parc 55 Mission
    8:30-9:30am - Bean Validation 1.1: What's New Under the Hood - Parc 55 Cyril Magnin II/III
    10:00-11:00am - JSR 353: Java API for JSON Processing - Parc 55 Mission
    10:00-12:00pm - Tutorial: Integrating Your Service into the GlassFish PaaS Platform - Parc 55 Devisidero
    11:30-12:30pm - What's New in JSF: A Complete Tour of JSF 2.2 - Parc 55 Cyril Magnin I
    11:30-12:30pm - Best of Both Worlds: Java Persistence with NoSQL and SQL - Parc 55 Mission
    1:00-2:00pm - Sharding Middleware to Achieve Elasticity and High Availability in the Cloud - Parc 55 Market Street
    1:00-2:00pm - Pimp My RESTful Java Applications - Parc 55 Cyril Magnin I
    3:00-4:00pm - Migrating Spring to Java EE - Parc 55 Cyril Magnin II/III
    4:30-5:30pm - JavaEE.Next(): Java EE 7, 8, and Beyond - Parc 55 Cyril Magnin II/III
    4:30-5:30pm - HTML5 WebSocket and Java - Parc 55 Cyril Magnin I
    4:30-5:30pm - Easy Middleware for Your Embedded Device - Nikko Ballroom II/III

    Read the article

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit

    ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit actually is a form submit that does not require a page refresh and only updates components in the view that are referenced from the command component's PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfContext.getCurrentInstance.addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of the fields as the full form gets submitted.

    Autosubmit

    An input component that has its autosubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form but only the component that triggers the submit, plus components referencing it in their PartialTriggers property.

    For example, consider a form that has three input fields inpA, inpB and inpC, with autosubmit=true set on inpA and required=true set on inpB and inpC.

    Use case 1: Running the view, entering data into inpA and then tabbing out of the field will submit the content for inpA but not for inpB and inpC. Furthermore, none of the required field settings on inpB and inpC causes an error.

    Use case 2: You change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields and thus show an error for the missing required value on inpC.

    Internally, using autosubmit=true on an input component sets the event root to just this field, which is good to have in case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on these components to point to the input field that has autosubmit=true defined.

    PartialSubmit vs. AutoSubmit

    Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI component is refreshed in response. Client side required field validation (as well as the server side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Auto submit is a functionality on the input components and also performs a partial form submit. However, in addition, an event root is defined that narrows the scope for the submitted data and thus the components that are validated on the request.

    To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF
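
    A minimal sketch of the managed-bean refresh option mentioned above, assuming the standard ADF Faces API (oracle.adf.view.rich.context.AdfFacesContext) and a component bound into the bean via its binding attribute; the bean and property names are illustrative.

        import javax.faces.component.UIComponent;
        import javax.faces.event.ActionEvent;
        import oracle.adf.view.rich.context.AdfFacesContext;

        public class SubmitBean {

            // bound from the page, e.g. <af:panelGroupLayout binding="#{submitBean.refreshTarget}" ...>
            private UIComponent refreshTarget;

            public void onAction(ActionEvent actionEvent) {
                // ... update model state here ...
                // ask ADF Faces to repaint just this component in the partial response
                AdfFacesContext.getCurrentInstance().addPartialTarget(refreshTarget);
            }

            public UIComponent getRefreshTarget() { return refreshTarget; }
            public void setRefreshTarget(UIComponent refreshTarget) { this.refreshTarget = refreshTarget; }
        }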

    Read the article

  • 3D terrain map with Hexagon Grids (XNA)

    - by Rob
    I'm working on a hobby project (I'm a web/backend developer by day) and I want to create a 3D tile (terrain) engine. I'm using XNA, but I can use MonoGame, OpenGL, or straight DirectX, so the answer does not have to be XNA specific. I'm looking for some high level advice on how to approach this problem. I know about creating height maps and such, there are thousands of references out there on the net for that; this is a bit more specific. What I'm more concerned with is the approach to get a 3D hexagon tile grid out of my terrain (since the terrain, and all 3D objects, are basically triangles).

    The first approach I thought about is to basically draw the triangles on the screen in the following order (blue numbers) to give me the triangles for terrain (black triangles) and then make hexes out of the triangles (red hex): http://screencast.com/t/ebrH2g5V. This approach seems complicated to me since I'm basically having to draw 4 different types of triangles.

    The next approach I thought of was to use the existing triangles like I did for a square grid and get my hexes from 6 triangles as follows: http://screencast.com/t/w9b7qKzVJtb8. This seems like the easier approach to me since there are only 2 types of triangles (I would have to play with the heights and widths to get a "perfect" hexagon, but the idea is the same).

    So I'm looking for:

    1) Any suggestions on which approach I should take, and why.
    2) How would I translate mouse position to a hexagon grid position (especially when moving the camera around)? For example, in the second image, if the mouse pointer were the green circle, how would I determine to highlight that hexagon and then translate that into grid coordinates (assuming it is 0,0)?
    3) Any references, articles, books, etc., to get me going in the right direction.

    Note: I've done hex grids and mouse-grid coordinate conversion before in 2D; I'm looking for some pointers on how to do the same in 3D. The result I would like to achieve is something similar to the following: http :// www. youtube .com / watch?v=Ri92YkyC3fw (sorry about the youtube link, but it will only let me post 2 links in this post... same rep problem I mention below...)

    Thanks for any help!

    P.S. Sorry for not posting the images inline, I apparently don't have enough rep on this stack exchange site.
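
    For question 2, once the mouse pick ray has been intersected with the terrain, the remaining step is the same point-to-hex conversion used in 2D. Below is a sketch of that conversion for flat-topped hexagons in axial coordinates, written in Java for illustration (the math ports directly to C#/XNA); the edge-length parameter and the omitted ray-plane intersection step are assumptions about how the terrain is laid out.

        public final class HexPicking {

            /** Axial hex coordinates. */
            public static final class Hex {
                public final int q, r;
                public Hex(int q, int r) { this.q = q; this.r = r; }
                @Override public String toString() { return "(" + q + "," + r + ")"; }
            }

            /** Convert a point on the ground plane to the hex containing it (flat-topped, edge length `size`). */
            public static Hex worldToHex(double worldX, double worldZ, double size) {
                // fractional axial coordinates for a flat-topped layout
                double q = (2.0 / 3.0 * worldX) / size;
                double r = (-1.0 / 3.0 * worldX + Math.sqrt(3.0) / 3.0 * worldZ) / size;
                return cubeRound(q, r);
            }

            // round fractional axial coords by rounding in cube space and fixing the
            // component with the largest rounding error so that x + y + z == 0 still holds
            private static Hex cubeRound(double q, double r) {
                double x = q, z = r, y = -x - z;
                long rx = Math.round(x), ry = Math.round(y), rz = Math.round(z);
                double dx = Math.abs(rx - x), dy = Math.abs(ry - y), dz = Math.abs(rz - z);
                if (dx > dy && dx > dz)      rx = -ry - rz;
                else if (dy > dz)            ry = -rx - rz;
                else                         rz = -rx - ry;
                return new Hex((int) rx, (int) rz);
            }

            public static void main(String[] args) {
                System.out.println(worldToHex(1.6, 0.4, 1.0)); // prints the hex under that ground-plane point
            }
        }

    The same routine answers the highlight question: convert the picked point, then draw the overlay for the returned (q, r) cell.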

    Read the article

  • Help with collision of a spawned object (position fixed) with objects that are translating on screen

    - by Amrutha
    Hey guys, I am creating a game using Corona SDK and so coding it in Lua. There are 2 separate functions:

    1. Translate the hit objects and change their color when they are tapped. The link below is the code I am using for the first function: http://developer.anscamobile.com/sample-code/fishies
    2. Spawn objects that will hit the translating objects on collision. Also, on collision the spawned object disappears and the translating object bears a color (indicating the collision). In addition, the size of this spawned object is dependent on the input volume level.

    The function I have written is as follows:

        --VOICE INPUT CODE
        local r = media.newRecording()
        r:startRecording()
        r:startTuner()

        --local function newBar()
        --    local bar = display.newLine( 0, 0, 1, 0 )
        --    bar:setColor( 0, 55, 100, 20 )
        --    bar.width = 5
        --    bar.y=400
        --    bar.x=20
        --    return bar
        --end

        local c1 = display.newImage("str-minion-small.png")
        c1.isVisible=false
        local c2 = display.newImage("str-minion-mid.png")
        c2.isVisible=false
        local c3 = display.newImage("str-minion-big.png")
        c3.isVisible=false

        --SPAWNING
        local function spawnDisk( event )
            local phase = event.phase
            local volumeBar = display.newLine( 0, 0, 1, 0 )
            volumeBar.y = 400
            volumeBar.x = 20
            -- volumeBar.isVisible=false
            local v = 20*math.log(r:getTunerVolume())
            local MINTHRESH = 30
            local LEFTMARGIN = 20
            local v2 = MINTHRESH + math.max (v, -MINTHRESH)
            v2 = (display.contentWidth - 1 * LEFTMARGIN ) * v2 / MINTHRESH
            volumeBar.xScale = math.max ( 20, v2 )
            local l = volumeBar.xScale
            local cnt1 = 0
            local cnt2 = 0
            local cnt3 = 0
            local ONE =1
            local val = event.numTaps
            --local px=event.x
            --local py=event.y
            if "ended" == phase then
                --audio.play( popSound )
                --myLabel.isVisible = false
                if l > 50 and l <=150 then
                    -- c1:setFillColor(10,105,0)
                    -- c1.isVisible=false
                    c1.x=math.random( 10, 450 )
                    c1.y=math.random( 10, 300 )
                    physics.addBody( c1, { density=1, radius=10.0 } )
                    c1.isVisible=true
                    cnt1= cnt1+ ONE
                    return c1
                elseif l > 100 and l <=250 then
                    --c2:setFillColor(200,10,0)
                    c2.x=math.random( 10, 450 )
                    c2.y=math.random( 10, 300 )
                    physics.addBody( c2, { density=2, radius=9000.0 } )
                    c2.isVisible=true
                    cnt2= cnt2+ ONE
                    return c2
                elseif l >=250 then
                    c3.x=math.random( 40, 450 )
                    c3.y=math.random( 40, 300 )
                    physics.addBody( c3, { density=2, radius=7000.0 , bounce=0.0 } )
                    c3.isVisible=true
                    cnt3= cnt3+ ONE
                    return c3
                end
            end
        end

        buzzR:addEventListener( "touch", spawnDisk ) -- touch the screen to create disks

    Now both functions work fine independently, but there is no collision happening. It's almost as if the translating object and the spawned object are on different layers. The translating object passes through the spawned object freely. Can anyone please tell me how to resolve this problem and how I can get them to collide? It's my first attempt at game development, that too for a mobile platform, so I would appreciate all help. Also, if I have not been specific do let me know, I'll try to frame the query better :). Thanks in advance.

    Read the article

  • Myths about Coding Craftsmanship part 2

    - by tom
    Myth 3: The source of all bad code is inept developers and stupid people. When you review code, is this what you assume? Shame on you. You are probably making assumptions in your code if you are assuming so much already. Bad code can be the result of any number of causes, including but not limited to: using dated techniques (like boxing when generics are available), not following standards (“look how he does the spacing between arguments!” or “did he really just name that variable ‘bln_Hello_Cats’?”), being redundant, using properties, methods, or objects in a novel way (like switching on button.Text between “Hello World” and “Hello World “ //clever use of space character… sigh), not following the SOLID principles, and hacking around assumptions made in earlier iterations / hacking in features that should be worked into the overall design. The first two issues, while annoying, are pretty easy to spot and easy to fix. If your coding team is made up of experienced professionals who are passionate about staying current, then these shouldn’t be happening. If you work with a variety of skills, backgrounds, and experience levels, then there will be some of this stuff going on. If you have an opportunity to mentor such a developer who is receptive to constructive criticism, don’t be a jerk; help them, and the codebase will improve. A little patience can improve the codebase, your work environment, and even your perspective. The novelty and redundancy I have encountered have often been the use of creativity when language knowledge was perceived as unavailable or too time-consuming to acquire. When developers learn on the job you get a lot of this. Rather than going to MSDN, developers will use what they know. Depending on the constraints of their assignment, hacking together what they know may seem quite practical. This was not stupid, though I often wonder how much time is actually “saved” by hacking. These issues are often harder to untangle, if we ever do untangle them. They can also grow out of control as we write hack after hack to make things work and get back to some development that is satisfying. Hacking upon an existing hack is what I call “feeding the monster”. Code monsters are anti-patterns and hacks gone wild. The reason code monsters continue to get bigger is that they keep growing in scope, touching more and more of the application. This is not the result of dumb developers. It is probably the result of avoiding design, of not taking the time to understand the problems, or of failing to anticipate or communicate the vision of the product. If our developers don’t understand the purpose of a feature or product, how do we expect potential customers to do so? Forethought and organization are often what is missing from bad code. Developers who do not use the SOLID principles should be encouraged to learn them and be given guidance on how to apply them. The time “saved” by giving hackers room to hack will have to be made up for, and then some: not as technical debt, but as shoddy work that, if not replaced, will be struggled with again and again. Bad code is not (usually) the result of dumb developers; it is the result of trying to do too much without the proper resources and of neglecting the right thing that needs doing in favor of the first thoughtless thing that comes into our heads. Object-oriented code is all about relationships between objects. Coders who believe their coworkers are all fools tend to write objects that are difficult to work with, are not eager to explain themselves, and perform erratically and irrationally.
If you constantly find you are surrounded by idiots, you may want to ask yourself if you are being unreasonable, if you are being closed-minded, or if you have chosen the right profession. Opening your mind up to the idea that you probably work with rational, well-intentioned people will probably make you a better coder, and it might even make you less grumpy. If you are surrounded by jerks who do not engage in the exchange of ideas and who do not care about their customers or the durability of the code you are building together, then I suggest you find a new place to work. Myth 4: Customers don’t care about “beautiful” code. Craftsmanship is customer-focused because it means that the job was done right and that the product will withstand the abuse, modifications, and scrutiny of our customers. Users can appreciate a predictable timeline for a release, a product delivered on time and on budget, a feature set that does not interfere with the task(s) it is supporting, quick turnarounds on exception messages, self-healing issues, and fewer issues overall. These are all hindered by skimping on craftsmanship, whether we are writing data access code or reusable components. What do you think? Does bad code come primarily from low-IQ individuals? Do customers care about beautiful code?

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I was offered a job building a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea and the high cost of development charged by a service company, and to have someone young he could trust on board to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer, thinking I could build anything. I was calling the shots. After some research I settled on PHP and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it; it was easy to learn, and I started coding right away. My code looked cleaner and more organized. It looked very good. But again, the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I had forgotten while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practices. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all: it's spaghetti code, it's a real pain to fix a bug, all the logic is in the controllers, and there is little object-oriented design. I keep having this persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when it is all going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency or the web application's performance. On one hand, the company is waiting for the product and cannot wait any longer. On the other hand, I can't see myself going any further with the current code. I could finish up, wrap it up, and deploy, but God only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • Why everybody should do Sales!

    - by FelixWehmeyer
    I speak with many business students and ask them what job they want to get into. Most of them tell me they want a job in Marketing, Management Consulting or Finance. I hardly ever hear “Sales, that is what I want to do”, and I often wonder why. I would like to start with a quote from Zig Ziglar, a successful salesman: "Nothing happens until someone sells something." But to get back to the main point, why wouldn’t you want to get into sales? When people think of sales, they picture a typical salesman in their head and think that selling is scary and all about manipulating, pressuring and pushing someone into buying something they don’t need. Are these stereotypes accurate? I don’t believe so. So why should you want to be in sales? If you think about selling as providing the solution to a problem and talking about the benefits of making a decision, then every job in this world comes out of selling. In every job you are convincing coworkers of your ideas or convincing your boss that the project you want to work on is good for the company.  These days, consumers and businesses are very well informed about services and products. When we are talking about highly complex products, such as IT solutions, businesses don’t accept your run-of-the-mill salesman who is pushing a sale. These are often long projects where salespeople have a consulting and leading role. Salespeople need to be able to advise companies and customers on their problems and convince a client that their solution is the best fit. Besides the fact that sales is by far not as scary and shady as you thought, there are a few points that will make you want to consider a sales career: Negotiating skills – When you are in sales you will learn how to negotiate. Salespeople learn to listen to their customers and try to make them happy, overcoming objections and coming to a final agreement that both parties are happy with. Persistence/Challenge – As a salesperson you will often hear a negative answer; in a sales role you will start to embrace this and see a ‘no’ as a challenge, not as a rejection. This attitude change can help you a lot in your career, but also in your personal life. You will become more optimistic and gain a go-getter attitude. Salary – As salespeople are seen as the moneymakers for the company, companies often reward their sales teams generously. Most likely, in a sales role you will receive a good basic salary, and you will often get nice bonuses on top of that based on your performance. Oracle is, for instance, the company that offers the highest average commission in the world. Further, you can expect many other benefits, as companies know that there is high demand for good salespeople. Teamwork – Sales is a lot like having your own business: you are responsible for your own territory or set of clients. You are the one who is responsible for the revenue coming from that territory, so in order to generate that revenue you will have to work together with many departments and people. Every (potential) client could be seen as a different project, and you are the project leader. Understanding customers and the business – Of any job you could choose, sales will give you the most insight into the market. Salespeople are usually well connected, talk with many different customers, learn about the market, and stay up to date on the latest changes. Even if you want to change to a different role in the long run, you have a great head start, as you understand the market and customers like no one else. 
Job security – Look at all the job postings out there; many of them are sales-related. So if you want a steady job, plenty of choice, and companies willing to invest in you, sales could be something for you.  Are you interested in exploring a sales career? At Oracle we are always looking for good sales professionals and fresh graduates who want to get into sales! For many languages, such as Flemish, Dutch, German, French, Swedish and Norwegian (and more), we are currently looking for graduates who want to develop their careers at Oracle. Please have a look at this article for the experience of a Business Development Consultant at Oracle in Dublin. Want to learn more about this job? Check out this link or send an email to jessica.ebbelaar-at-oracle.com! Have a look at our website http://campus.oracle.com for all of our latest sales and non-sales vacancies!

    Read the article

  • Guiding Management to the Correct Decision

    - by Blumer
    My supervisor (also a developer) and I have a running joke about writing a book called "Managing From Beneath: Subversively Guiding Management to the Right Decision" and including a number of "techniques" we've developed for helping those who make the decisions to make the right ones. So far, we've got (cynicism warning!): BIC It! BIC stands for "Bury In Committee." When a bad idea comes up that someone wants to champion, we try to get it deferred to a committee for input. Typically it will either get killed outright (especially if other members of the committee are competing for you as a resource), or it will be hung up long enough that the proponent forgets about it. Smart, Stupid, or Expensive? When someone gets a visionary idea, offer them three ways to do it: a smart way, a stupid way, and an expensive way. The hope is that you've at least got a 2/3 shot of not having to do it the way that makes a piece of your soul die. All-Pro. It's a preemptive pro/con list in which you get into the mind of the (pr)opponent and think through what cons they might raise against doing it your way. Twist them into pros and present them in your pro list before they have a chance to present them as cons. Dependicitis. Link pending decisions together, ideally with the proponent's pet project as the final link in the chain. Use this leverage to force action on those that have been put off. Preemptive Acceptance. Sometimes it's clear that management is going to go a particular direction regardless of advice to the contrary, and it's time to make the best of it. Take the opportunity to get something else you need, though. Approach the sponsor out of the blue and take the first step: "You know, I've been thinking about it, and while it's not the route I would advise, as long as we can get the schedule and budget for Project Awesome loosened up, I can work some magic to make your project fly." So ... what techniques have you come up with to try to head off problem projects or make the best of what may come?

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind.  I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client. For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page.  For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it. As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag-and-drop objects that traditionally provided 70 percent of the mark-up and configuration?  Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code? To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If the .NET compilers can turn our VB, F#, and C# source code into an Intermediate Language that the JIT then executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

< Previous Page | 359 360 361 362 363 364 365 366 367 368 369 370  | Next Page >