Search Results

Search found 19212 results on 769 pages for 'side projects'.


  • ADF Enterprise Application Development - Made Simple (Book Review)

    - by Frank Nimphius
      Sten E. Vesterli wrote "Oracle ADF Enterprise Application Development – Made Simple", published by Packt Publishing in 2011 (http://www.packtpub.com/oracle-adf-enterprise-application-development/book). A common question on OTN, and also when talking to clients or customers, is where and how to start your ADF application development. Especially when the current programming background is not in Java but in 4GL or PL/SQL, developers often look for answers to the following questions:
      · How long does it take to learn Oracle ADF?
      · How long does it take to replace a Forms application with ADF?
      · How many developers do I need?
      · Do I need to know Java to use ADF, and if yes, how well do I need to know it?
      · How do I structure my programming files, organizing them in JDeveloper workspaces, projects and libraries?
      · What are best practices for naming Java packages, and how do I avoid naming conflicts in ADF in general?
      · How many Application Modules do I need or should I create?
      · How do I test applications?
      Sten Vesterli answers all of the above questions and more in his book, which makes it a great addition to the three existing Oracle ADF books. In order of complexity (which, in my opinion, is also the order in which reading the available Oracle ADF books makes sense), Sten's book should come second – though it is also useful to those who are already more advanced with Oracle ADF. So if you are absolutely new to Oracle ADF, the order of books to read to bring you up to an expert level should be:
      1. Grant Ronald; "Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF" (McGraw Hill 2010)
      2. Sten Vesterli; "Oracle ADF Enterprise Application Development – Made Simple" (Packt Publishing 2011)
      3. Duncan Mills, Peter Koletzke; "Oracle JDeveloper 11g Handbook: A Guide to Fusion Web Development" (McGraw Hill 2009)
      4. Frank Nimphius, Lynn Munsinger; "Oracle Fusion Developer Guide: Building Rich Internet Applications with Oracle ADF Business Components and Oracle ADF Faces" (McGraw Hill 2010)
      If you are not new to Oracle ADF and Oracle JDeveloper, buy Sten Vesterli's book anyway. It is worth it, and you will want to have it on your bookshelf. See the table of contents below to get a better idea of what this book covers:
      · Chapter 1: The ADF Proof of Concept
      · Chapter 2: Estimating the Effort
      · Chapter 3: Getting Organized
      · Chapter 4: Productive Teamwork
      · Chapter 5: Prepare to Build
      · Chapter 6: Building the Enterprise Application
      · Chapter 7: Testing your Application
      · Chapter 8: Look and Feel
      · Chapter 9: Customizing the Functionality
      · Chapter 10: Securing your ADF Application
      · Chapter 11: Package and Deliver
      · Appendix: Internationalization
      The book is written with a lot of good humor, which makes the read very enjoyable (from a geek's perspective, of course). My favorite quote – just in case you are interested – is from page 97, where Sten talks about getting organized: "Stop sending e-mails to your team. Just stop it. E-mail is so last century. …" So true, so true! This quote's runner-up is the "boss key" on page 128, where Sten talks about productivity and how Oracle Team Productivity Center (TPC) can help you with it. Quotes like these stick in your brain and make sure you never forget. Go for it!

    Read the article

  • NetBeans PHP Community Council

    - by Tomas Mysik
    Hi all, today we would like to inform all of you that you now have a chance to improve NetBeans via the NetBeans PHP Community Council. The author of this activity is Timur Poperecinii, and he would like to tell you a few words about it.
    Hello passionate technical people! First of all, let me introduce myself: my name is Timur, I’m a developer from Moldova (that little country between Romania and Ukraine). I develop mostly in .NET and jQuery, but I love to learn more; while not an expert, I am familiar with Java (Struts2, Play), PHP (Symfony2), Ruby (Rails), Sencha Touch 2 and other technologies. I was “introduced” to PHP recently by a client of mine who requested that the work be done specifically in PHP. Let me tell you a little story about my experience with open source and IDEs: when I was studying at university, in 2007 I think, I did a simple little application in PHP and thought “Damn, if only there was a good IDE for PHP so I could relax and not have to remember all the function names”. When I searched on the internet, pretty much everyone was using Vim or Emacs on Linux, but they had no autocomplete anyway, just syntax highlighting. I remember using some tool like Notepad++, I think. Nowadays everything has changed: we have highlighting and autocomplete for just about all standard things in PHP in many IDEs. I use NetBeans for PHP, and I really am happy with the experience I have there with standard PHP code, but for frameworks I still think there is lots of room for improvement. For example, we have some Symfony 2 and Twig support, but I’d love to see more of that coming. I’m a big fan of file templates, where the main goal is to not waste time writing over and over again something that can be generated, and it counts even more when you don’t have a lot of autocomplete. So what I thought was: “Hey, I know Java a little, and NetBeans has plugins, so maybe it’s worth trying to do a file templates plugin” – and so I did. You can find details about my Unified Udevi Symfony2 Plugin for NetBeans 7.2 on my blog. It wasn’t hard, and it was even fun!
    Give back to open source
    Now think a little: NetBeans is an open source project, and PHP support is just a part of it, so the resources are pretty limited in this area. But we, as the community that uses this product, want to have the best possible experience with PHP and frameworks(!!!). So why don’t we GIVE BACK TO OPEN SOURCE? Imagine an IDE that can do all the things you want, plus it is free. Now how far is NetBeans from that point? I guess not so far – you might miss a little niche thing that you use on a daily basis, but then the question appears: why don’t you make it happen on your own?
    NetBeans PHP Community Council
    What I proposed is to create a NetBeans PHP Community Council that will be formed of people willing to change something, willing to create plugins for their own needs and for the needs of the community, to test the plugins created by them too, and basically to evolve NetBeans in the direction they want it to go. I have already talked with the NetBeans PHP team. They are only happy to help this Council, with technical advice, opening some APIs we might need access to, and other things. One important thing to mention is that this Council is a community project, so though we’ll have direct discussions with the NetBeans PHP Dev team, NetBeans is not the leading force here – it is the community. You can see more details about the goals and structure I proposed at the NetBeans PHP Community Council wiki page.
    We use this mailing list for discussions and topics related to the Council: [email protected]. How can I join? To join the NetBeans PHP Community Council, please send an email to [email protected] with the subject of the mail starting with [Council New Member]. You can subscribe to the mailing list here: http://netbeans.org/projects/php/lists. In your mail, please indicate your location, age and experience in both Java and PHP. I need this data to assign you to a team. A response will be sent to you with your next assignment and some people to contact. I really hope that you’ll make a step forward and try to make your everyday use of NetBeans even more fun.

    Read the article

  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of its two CD drives. I want to rip an image of the CD and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, nautilus, or any of the other nice user-friendly things). There are two CD drives connected via SCSI:
        austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: ATA      Model: WDC WD400EB-75CP  Rev: 06.0
          Type:   Direct-Access                     ANSI SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: Lite-On  Model: LTN486S 48x Max   Rev: YDS6
          Type:   CD-ROM                            ANSI SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 01 Lun: 00
          Vendor: SAMSUNG  Model: CD-R/RW SW-248F   Rev: R602
          Type:   CD-ROM                            ANSI SCSI revision: 05
    However, when I try mounting either of these devices (and every other device that could possibly be the CD drive), it says no medium found:
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom
        mount: no medium found on /dev/sr1
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom
        mount: no medium found on /dev/sr0
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom
        mount: no medium found on /dev/sr1
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom
        mount: no medium found on /dev/sr0
        austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom
        mount: no medium found on /dev/sr1
    Here are the contents of my /dev folder:
        austin@austinvpn:/proc/scsi$ ls /dev
        agpgart loop6 ram6 tty10 tty38 tty8
        austinvpn loop7 ram7 tty11 tty39 tty9
        block lp0 ram8 tty12 tty4 ttyS0
        bsg mapper ram9 tty13 tty40 ttyS1
        btrfs-control mcelog random tty14 tty41 ttyS2
        bus mem rfkill tty15 tty42 ttyS3
        cdrom net root tty16 tty43 urandom
        cdrom1 network_latency rtc tty17 tty44 usbmon0
        cdrw network_throughput rtc0 tty18 tty45 usbmon1
        char null scd0 tty19 tty46 usbmon2
        console oldmem scd1 tty2 tty47 usbmon3
        core parport0 sda tty20 tty48 usbmon4
        cpu_dma_latency pktcdvd sda1 tty21 tty49 vcs
        disk port sda2 tty22 tty5 vcs1
        dri ppp sda5 tty23 tty50 vcs2
        ecryptfs psaux sg0 tty24 tty51 vcs3
        fb0 ptmx sg1 tty25 tty52 vcs4
        fd pts sg2 tty26 tty53 vcs5
        full ram0 shm tty27 tty54 vcs6
        fuse ram1 snapshot tty28 tty55 vcs7
        hpet ram10 snd tty29 tty56 vcsa
        input ram11 sndstat tty3 tty57 vcsa1
        kmsg ram12 sr0 tty30 tty58 vcsa2
        log ram13 sr1 tty31 tty59 vcsa3
        loop0 ram14 stderr tty32 tty6 vcsa4
        loop1 ram15 stdin tty33 tty60 vcsa5
        loop2 ram2 stdout tty34 tty61 vcsa6
        loop3 ram3 tty tty35 tty62 vcsa7
        loop4 ram4 tty0 tty36 tty63 vga_arbiter
        loop5 ram5 tty1 tty37 tty7 zero
    And here is my fstab file:
        austin@austinvpn:/proc/scsi$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/mapper/austinvpn-root / ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda1 during installation
        UUID=ed5520ae-c690-4ce6-881e-3598f299be06 /boot ext2 defaults 0 2
        /dev/mapper/austinvpn-swap_1 none swap sw 0 0
    Am I missing something or doing something wrong, or is there just no CD in the drive, or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!
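
    One way to double-check for a disc without mounting is to ask the kernel for the drive status directly, via the CDROM_DRIVE_STATUS ioctl from <linux/cdrom.h>. Here is a minimal sketch in Python; the device names match the ones above, and running it with sudo (or as a user with read access to the device nodes) is an assumption about this particular box:

        #!/usr/bin/env python
        # Query the tray/medium status instead of trying to mount.
        # The ioctl number and return codes come from <linux/cdrom.h>.
        import fcntl
        import os

        CDROM_DRIVE_STATUS = 0x5326
        CDSL_CURRENT = 0x7FFFFFFF  # "current slot" for non-changer drives
        STATUS = {0: "no info", 1: "no disc", 2: "tray open",
                  3: "drive not ready", 4: "disc ok"}

        for dev in ("/dev/sr0", "/dev/sr1"):
            # O_NONBLOCK lets us open the device even when no medium is present.
            fd = os.open(dev, os.O_RDONLY | os.O_NONBLOCK)
            try:
                code = fcntl.ioctl(fd, CDROM_DRIVE_STATUS, CDSL_CURRENT)
                print("%s -> %s" % (dev, STATUS.get(code, code)))
            finally:
                os.close(fd)

    A "disc ok" result means there is readable media in that drive; "no disc" or "tray open" would be consistent with the mount errors shown above.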

    Read the article

  • Are there too many qualified software development engineers chasing too few jobs?

    - by T Gregory
    I am trying to write this question in a non-argumentative way, but it is quite emotionally charged for some, so please bear with me. In the U.S., we hear constantly from CEOs that they cannot find enough qualified software engineers. In fact, it is the position of the U.S. government that demand for software engineering talent outpaces supply. This position can be clearly seen in the granting of tens of thousands of H-1B visas, but also in the following excerpt from the official 2010-11 Bureau of Labor Statistics Occupational Outlook Handbook: Employment of computer software engineers is expected to increase by 32 percent from 2008-2018, which is much faster than the average for all occupations. In addition, this occupation will see a large number of new jobs, with more than 295,000 created between 2008 and 2018. Demand for computer software engineers will increase as computer networking continues to grow. For example, expanding Internet technologies have spurred demand for computer software engineers who can develop Internet, intranet, and World Wide Web applications. Likewise, electronic data-processing systems in business, telecommunications, healthcare, government, and other settings continue to become more sophisticated and complex. Implementing, safeguarding, and updating computer systems and resolving problems will fuel the demand for growing numbers of systems software engineers. New growth areas will also continue to arise from rapidly evolving technologies. The increasing uses of the Internet, the proliferation of Web sites, and mobile technology such as the wireless Internet have created a demand for a wide variety of new products. As more software is offered over the Internet, and as businesses demand customized software to meet their specific needs, applications and systems software engineers will be needed in greater numbers. In addition, the growing use of handheld computers will create demand for new mobile applications and software systems. As these devices become a larger part of the business environment, it will be necessary to integrate current computer systems with this new, more mobile technology. However, from the employee side of the equation, we often hear the opposite. Many of the stories of SDEs with graduate degrees and decades of experience on the unemployment line, or the big tech interview war stories, are anecdotal, for sure. But there is one piece of data that is neither anecdotal nor transitory, and that is the aggregate decisions of millions of undergraduates about what degree to pursue. Here, a different picture emerges from the data, and that picture is not good for the software profession. According to the most recent Taulbee Survey from the Computing Research Association, undergrad degree production in CS and CE has fallen nearly 60% since 2004. (Undergrad enrollments have ticked up in the past two years, but only modestly.) Here we see a basic disconnect between what corporate CEOs and the US government are saying and what potential employees really think about job prospects in software engineering. So my questions are these. Who are we to believe? Is there an acute talent shortage, or is there a long-term structural oversupply in the SDE labor market? Can anyone provide reliable data on long-term unemployment among SDEs? How many are leaving the profession due to lack of work? Real data is most helpful. Thanks.

    Read the article

  • Jersey non blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way: every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is OK for lots of cases, but imagine how that will work when you need to make lots of parallel requests. Of course you can limit the number of threads used for asynchronous requests (and it's really a wise thing to do; you do want to control your resources), but you'll get another, maybe less pleasant, consequence: processing time will obviously increase. There are a few projects trying to deal with that problem, commonly named async HTTP clients. I didn't want to reinvent the wheel, so I decided I'd use AHC – the Async Http Client made by Jeanfrancois Arcand. There is also an interesting implementation from Apache – HttpAsyncClient – but it is still in "very early stages of development", and no others were in similar or better shape than AHC. How does this work? Non-blocking clients allow users to make the same asynchronous requests as with the standard approach, but the implementation is different: threads are better utilized and don't spend most of their time in an idle state. Simply described: when you make a request (send it over the network), you are waiting for the reply from the other side. And here comes the main advantage of the non-blocking approach – it uses these threads for further work, like making other requests or processing responses. Idle time is minimized and your resources (threads) are far better used. Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple dumb tests are showing huge improvement in cases where lots of concurrent asynchronous requests are made in a short period. Last but not least – this module is still experimental, so if you don't like something, or if you have ideas for improvements or any other feedback, feel free to comment on this blog post, send mail to [email protected] or contact me personally. All feedback is greatly appreciated!
    Maven dependency (will be present in the java.net maven 2 repo by the end of the day): http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client
        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>
    Code snippet:
        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);
        // or
        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });
    Javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/

    Read the article

  • Tap Into Tier 1 ERP

    - by Christine Randle
    By: Larry Simcox, Senior Director, Accelerate Corporate Programs
    Your customers aren’t satisfied with so-so customer service. Your employees aren’t happy with below-average salaries. So why would you settle for second-rate, tier 2 ERP? A recent report from Nucleus Research found that usability improvements and rapid implementation tools are simplifying deployments, putting tier 1 enterprise applications well within reach for midsize companies. So how can your business tap into the power of tier 1 ERP? And what are the best ways to manage a deployment?
    The Reputation of ERP Implementations
    Overhauling internal operations and implementing ERP can be a challenging endeavor for organizations of all sizes. Midsize companies often shy away from enterprise-class ERP, fearing complexity, limited resources and perceived challenging deployments. Many forward-thinking executives experienced ERP implementations in the late 90s and early 2000s, and now embrace a strategy to grow their business by investing in a foundation for innovation and growth via ERP modernization projects. In recent years there has been a strong consumerization of IT, with enterprise applications and their delivery methods evolving to become more user-friendly. Today, usability improvements and modern implementation tools have made top-tier ERP solutions more accessible for growing companies. Nucleus found that because enterprise-class software can now be rapidly deployed, the payback is quicker, the risks are lower and the software is less disruptive; overall, companies can differentiate themselves from their competitors and achieve more success with the advantages these types of systems deliver. Tapping into the power of tier 1 ERP can be made much easier with Oracle Accelerate solutions. Created by Oracle's expert partners and reviewed by Oracle, Oracle Accelerate solutions are simple-to-deploy, industry-specific, packaged solutions that provide a fast time to benefit, which means getting the right solution in place quickly and inexpensively, with a controlled scope and predictable returns. How are growing midsize companies successfully deploying tier 1 ERP? According to Nucleus Research, companies can increase success in their tier 1 ERP deployments by limiting customization, planning a rapid go-live, improving communication across departments, and considering different delivery options. Oracle Accelerate solutions incorporate industry best practices and encourage rapid deployments. Even more, Nucleus found that customers deploying tier 1 ERP with Oracle that had used Oracle Business Accelerators, Oracle’s rapid implementation tools, reduced the time to deploy Oracle E-Business Suite by at least 50 percent. Industrial manufacturer L.H. Dottie is one company that needed ERP with enhanced capabilities to support its growth and streamline business processes. Using out-of-the-box configuration of Oracle E-Business Suite modules (provided by Oracle Business Accelerators and delivered by Oracle Partner C3 Business Solutions), L.H. Dottie was able to speed its implementation and went live in just six and a half months. With tier 1 ERP, the company was able to grow and do its business better, automating a variety of processes, accelerating product delivery and gaining powerful data analysis capabilities that helped drive its business into further regions. See more details about their ERP implementation here. Tier 1 enterprise-class applications have proven to boost the success of Oracle’s midsize customers.
    As Nucleus Research reiterates, companies poised for growth or seeking to compete against larger competitors absolutely can tap into the power of tier 1 ERP and position themselves as enterprise-class by leveraging Oracle Accelerate solutions. You can learn more here about The Evolving Business Case for Tier 1 ERP in Midsize Companies in our exclusive webcast with Nucleus.

    Read the article

  • When is your interview?

    - by Rob Farley
    Sometimes it’s tough to evaluate someone – to figure out if you think they’d be worth hiring. These days, since starting LobsterPot Solutions, I have my share of interviews, on both sides of the desk. Sometimes I’m checking out potential staff members; sometimes I’m persuading someone else to get us on board for a project. Regardless of who is on which side of the desk, we’re both checking each other out. The world is not how it was some years ago. I’m pretty sure that every time I walk into a room for an interview, I’ve searched for them online, and they’ve searched for me. I suspect they usually have the easier time finding me, although there are obviously other Rob Farleys in the world. They may have even checked out some of my presentations from conferences, read my blog posts, maybe even heard me tell jokes or sing. I know some people need me to explain who I am, but for the most part, I think they’ve done plenty of research long before I’ve walked in the room. I remember when this was different (as it could still be for you). I remember a time when I dealt with recruitment agents, looking for work. I remember sitting in rooms having been given a test designed to find out if I knew my stuff or not, and then being pulled into interviews with managers who had to find out if I could communicate effectively. I’d need to explain who I was, what kind of person I was, what my value system involved, and so on. I’m sure you understand what I’m getting at. (Oh, and in case you hadn’t realised, it’s a T-SQL Tuesday post, this month about interviews.) At TechEd Australia some years ago (either 2009 or 2010 – I forget which), I remember hearing a comment made during the ‘locknote’, the closing session. The presenter described a conversation he’d heard between two girls, discussing a guy that one of them had just started dating. The other girl expressed horror at the fact that her friend had met this guy in person, rather than through an online dating agency. The presenter pointed out that people realise that there’s a certain level of safety provided through the checks that those sites do. I’m not sure I completely trust this, but I’m sure it’s true for people’s technical profiles. If I interview someone, I hope they have a profile. I hope I can look at what they already know. I hope I can get samples of their work, and see how they communicate. I hope I can get a feel for their sense of humour. I hope I already know exactly what kind of person they are – their value system, their beliefs, their passions. Even their grammar. I can work out if the person is a good risk or not from who they are online. If they don’t have an online presence, then I don’t have this information, and the risk is higher. So if you’re interviewing with me, your interview started long before the conversation. I hope it started before I’d ever heard of you. I know the interview in which I’m being assessed started before I even knew there was a product called SQL Server. It’s reflected in what I write. It’s in the way I present. I have spent my life becoming me – so let’s talk! @rob_farley

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard

    - by Stuart Brierley
    Recently I decided to try out the BizTalk Server Pipeline Component Wizard when creating a new pipeline component for BizTalk 2009. There are different versions of the wizard available, so be sure to download the appropriate version for the BizTalk environment that you are working with. Following the download and expansion of the zip file, you should be left with a Visual Studio solution. Open this solution and build the project. Following this, installation is straightforward: locate and run the built setup.exe file in the PipelineComponentWizard Setup project and click through the small number of installation screens. Once you have completed installation you will be ready to use the wizard in Visual Studio to create your BizTalk pipeline component. Start by creating a new project, selecting BizTalk Projects, then BizTalk Server Pipeline Component. You will then be presented with the splash screen. The next step is General Setup, where you will detail the class name, namespace, pipeline and component types, and the implementation language for your pipeline component. The options for pipeline type are Receive, Send or Any. Depending on the pipeline type chosen, there are different options presented for the component type, matching those available within the BizTalk pipelines themselves:
    - Receive: Decoder, Disassembling Parser, Validate, Party Resolver, Any.
    - Send: Encoder, Assembling Serializer, Any.
    - Any: Any.
    The options for implementation language are C# or VB.Net. Next you must set up the UI settings - these are the settings that affect the appearance of the pipeline component within Visual Studio. You must detail the component name, version, description and icon. Next is the definition of the variables that the pipeline component will use. The values for these variables will be defined in Visual Studio when creating a pipeline. The options for each variable you require are:
    - Designer Property: the name of the variable.
    - Data Type: String, Boolean, Integer, Long, Short, Schema List, Schema With None.
    Clicking Finish now will complete the wizard stage of the creation of your pipeline component. Once the wizard has completed you will be left with a BizTalk Server Pipeline Component project containing a skeleton code file for you to complete. Within this code file you will mainly be interested in the Execute method, which is left mostly empty, ready for you to implement your custom pipeline code:
        #region IComponent members
        /// <summary>
        /// Implements IComponent.Execute method.
        /// </summary>
        /// <param name="pc">Pipeline context</param>
        /// <param name="inmsg">Input message</param>
        /// <returns>Original input message</returns>
        /// <remarks>
        /// IComponent.Execute method is used to initiate
        /// the processing of the message in this pipeline component.
        /// </remarks>
        public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
        {
            //
            // TODO: implement component logic
            //
            // this way, it's a passthrough pipeline component
            return inmsg;
        }
        #endregion
    Once you have implemented your custom code, build and compile your custom pipeline component, then add the compiled .dll to C:\Program Files\Microsoft BizTalk Server 2009\Pipeline Components.
    When creating a new pipeline in Visual Studio, reset the toolbox and the custom pipeline component should appear, ready for you to use in your BizTalk pipeline. Drop the pipeline component into the relevant pipeline stage and configure the component properties (the variables defined in the wizard). You can now deploy and use the pipeline as you would any other custom pipeline.

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010? A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: an Office Data Connection (ODC) file or a Universal Data Connection (UDC) file. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.
    How to create a SharePoint Server Data Connection Library:
    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.
    How to create a new data connection file in InfoPath:
    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.
    Take advantage of the Data Connection Library and the other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007, and free SharePoint templates and web parts.

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series. Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behavior), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with (and I count myself in that bucket) the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD. For me, it all comes down to the answer to the following question: how do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code. The second thing he doesn't like is "backing into an architecture" (go read his post to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare: the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor. I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.
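
    To make the "write a failing test, make it pass, refactor" loop concrete, here is a minimal sketch in Python (the discount function and its rule are made up purely for illustration; any xUnit-style framework works the same way):

        import unittest

        # Step 1 (red): state the behavior you want as a failing test.
        class DiscountTests(unittest.TestCase):
            def test_ten_percent_discount(self):
                self.assertAlmostEqual(apply_discount(100.0, 0.10), 90.0)

        # Step 2 (green): write the simplest code that makes the test pass.
        def apply_discount(price, rate):
            return price * (1 - rate)

        # Step 3 (refactor): rename, extract, and restructure freely;
        # the green test is what makes the refactoring safe.

        if __name__ == "__main__":
            unittest.main()

    Note that the test exercises external behavior (price in, discounted price out), not internal details, which is exactly why it survives refactoring.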

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think, just like records, these are broken way more often than kept. I find this the funniest thing, as we all take resolutions every year, but not every year can we manage to keep them. Well, does it mean we should not take resolutions? In fact, I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight, and I usually manage to keep eating healthy till the end of January. When February begins, I begin to lose focus from my goal, and as March starts, the "as usual" eating habits begin. Looking at the positive side: what would happen if I did not eat healthy in January every year? I think that might cause terrible consequences to my health in the long run. So keeping resolutions is a good practice, and following them to the extent one can is commendable. Let us come back to the world of SQL Server. What are my resolutions for the year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here.
    1. To understand SQL Server Performance Tuning at a deeper level
    I think I am already halfway through. I am usually busy doing hands-on performance tuning for at least 12 days in any given month, on average. That means I am doing this activity for almost two weeks a month. I believe that I have a good understanding of the subject. Note that the word that I have used is "good," and not "best." There are often cases when I am stumped, and I have no clue of what to do next. Then I usually go for my "trial and error" method - whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever have to go for the trial and error method again to achieve the same solution. I should know the solution right away when I see the problem. I do understand that performance tuning can be a strange animal at times, and one cannot guess the right step every time. However, aiming for a high goal never hurts, and I am going to learn more and more in this focused area.
    2. Going further from a basic BI understanding
    I do fairly decently with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things like MDS, StreamInsight, etc.). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I want to take my learning further in this direction. I do not want to be a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting in user group sessions and other places on this subject. When I have to prepare a new subject for presentations, I think I force myself to learn more. I am committed to learning a bit more in this direction.
    3. Learning the new features of SQL Server 2011 (Denali)
    This is the new thing from Microsoft for all the SQL geeks. I am eagerly waiting for the final product later this year, and I am planning to learn it well. I think if I follow my above two goals, this goal will be automatically covered. I am eager and excited about this new offering from Microsoft. I guess these are my resolutions; maybe next year about the same time, I must revisit this post and see how successful I am in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (courtesy: Calvin and Hobbes). I think when we cannot resolve our resolutions, we tend to act like Calvin.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Functional/nonfunctional requirements VS design ideas

    - by Nicholas Chow
    Problem domain: Functional requirements define what a system does. Non-functional requirements define quality attributes of what the system does as a whole (performance, security, reliability, volume, usability, etc.). Constraints limit the design space; they restrict designers to certain types of solutions.
    Solution domain: Design ideas define how the system does it. For example, a stakeholder need might be "we want to increase our sales, therefore we must improve the usability of our webshop so more customers will purchase"; a requirement can be written for this (problem domain). Design takes this further into the solution domain by saying "therefore we want to offer credit card payments in addition to the current prepayment option".
    My problem is that the transition from requirement to design seems really vague. When writing requirements, I am often unsure whether or not I have incorporated design ideas in my requirements, which would make my requirements wrong. Another problem is that I often write functional requirements as what a system does, and then I also specify in what timeframe it must be done. But is this correct? Is it then still a functional requirement or a non-functional one? Is it better to separate it into two distinct requirements? Here are a few requirements I wrote:
    FR1 Registration of Organizer. FR1 describes the registration of an Organizer on CrowdFundum.
    FR1.1 The system shall display a registration form on the website.
    FR1.2 The system shall require a Name, Username, Document number passport/ID card, Address, Zip code, City, Email address, Telephone number, Bank account, Captcha code on the registration form when a user registers.
    FR1.4 The system shall display an error message containing "Registration could not be completed" to the subscriber within 1 second after the system check of the registration form was unsuccessful.
    FR1.5 The system shall send a verification email containing a verification link to the subscriber within 30 seconds after the system check of the registration form was successful.
    FR1.6 The system shall add the newly registered Organizer to the user base within 5 seconds after the verification link was accessed.
    FR2 Organizer submits a Project. FR2 describes the submission of a Project by an Organizer on CrowdFundum.
    FR2 The system shall display a submit Project form to the Organizer accounts on the website.
    FR2.3 The system shall check for completeness the Name of the Project, 1-3 Photos, Keywords of the Project, Punch line, Minimum and maximum amount of people, Funding threshold, One or more reward tiers, Schedule of when what will be organized, Budget plan, 300-800 words of additional information about the Project, Contact details, within 1 second after an Organizer submits the submit Project form.
    FR2.8 The system shall add to the homepage in the new Projects category the Project link within 30 seconds after the system made a Project webpage.
    FR2.9 The system shall include in the Project link for the homepage: Name of the Project, 1 Photo, Punch line, within 30 seconds after the system made a Project webpage.
    Questions:
    FR1.1: Have I incorporated a design idea here? Would "the system shall have a registration form" be a better functional requirement?
    FR1.2, FR2.3: Are these singular? Would the conditions be better written as a separate requirement each?
    FR1.4: Is this a design idea? Is this a correct functional requirement, or have I incorporated something non-functional (performance) in it? Would it be better if I wrote it like this:
    FR1 The system shall display an error message when a check is unsuccessful.
    NFR: The system will respond to unsuccessful registration form checks within 1 second.
    Same question with FR2.8 and FR2.9.
    FR2.3: The system shall check for "completeness" - is completeness used ambiguously here? Should I rephrase it?
    FR1.2: I added that the system shall require a "Captcha code" - is this a functional requirement, or does it belong to the security aspect of a non-functional requirement?
    I am eagerly waiting for your response. Thanks!

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture - or one that is not adhered to - can prove fatal to a project. When I was at Aggreko we had a fairly successful feature branching strategy (although the developers hated it) that meant we could have multiple feature teams working at the same time without impacting each other. This had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging. Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum, each Scrum team works for a fixed period of time on a single sprint. You can have one or more Scrum teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current sprint. So, what does this mean for a branching strategy? We have been using a "Main" (sometimes called "Trunk") line and doing a branch for each sprint. It's like feature branching, but with only ONE feature in operation at any one time, so there are no conflicts.
    Figure: DEV folder containing the Development branches.
    I know that some folks advocate applying a label at the start of each sprint and then rolling back if you need to, but I have always preferred the security of a branch.
    Like:
    - Being able to create a release from Main that has Sprint3 code even while Sprint4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone's sprint branch but never got checked in. (We had this at the merge of Sprint2.)
    - If you always operate this way as a standard, it makes it easier to add more Scrum teams in the future.
    - Muscle memory of this way of working.
    Don't like:
    - Additional DB space for the branches.
    - Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: What you would have to do is see which sprint the changes were made in and then check the history of the same file in that sprint. A little added complexity that you would have to deal with anyway with multiple teams.
    - Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case. Note: We ALWAYS delete the sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images Sprint2 has already been deleted.
    Why take the chance of having a problem rolling back, or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?
    Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching

    Read the article

  • Change The Windows 7 Start Orb the Easy Way

    - by Matthew Guay
    Want to make your Windows 7 PC even more unique and personalized? Then check out this easy guide on how to change your start orb in Windows 7.
    Getting Started
    First, download the free Windows 7 Start Button Changer (link below), and extract the contents of the folder. It contains the app along with a selection of alternate start button orbs you can try out. Before changing the start button, we advise creating a system restore point in case anything goes wrong. Enter System Restore in your Start menu search, and select "Create a restore point". Please note: we tested this on both the 32-bit and 64-bit editions of Windows 7, and didn't encounter any problems or stability issues. That said, it is always prudent to make a restore point just in case a problem does happen. Click the Create button, then enter a name for the restore point, and click Create.
    Changing the Start Orb
    Once this is finished, run the Windows 7 Start Button Changer as administrator by right-clicking on it and selecting "Run as administrator". Accept the UAC prompt that will appear. If you don't run it as an administrator, you may see a warning; click Quit, and then run it again as administrator. You should now see the Windows 7 Start Button Changer. On the left it shows what your current (default) start orb looks like inactive, when hovered over, and when selected. Click the orb on the right to select a new start button. Here we browsed to the sample orbs folder, and selected one of them. Let's give Windows the Media Center orb for a start orb. Click the orb you want, and then select Open. When you click Open, your screen will momentarily freeze and your taskbar will disappear. When it reappears, your computer will have gone from the old, default Start orb style to your new, exciting Start orb! Here it is default, and glowing when hovered over. Now the Windows 7 Start Orb Changer will show your new Start orb on the left side. If you would like to revert to the default orb, simply click the folder icon to restore it. Or, if you would like to change the orb again, restore the original first and then select a new one. The orbs don't have to be round; here's a fancy Windows 7 logo as the start button. The start orb change will work in the Aero and Aero Basic (which Windows 7 Starter uses) themes, but will not show up in the classic, Windows 2000-style themes. Here's how the new start button looks with the Aero Classic theme. There are tons of orbs available, including this cute smiley, so choose one that you like to make your computer uniquely yours.
    Conclusion
    This is a cute way to make your desktop unique, and can be a great way to make a truly personalized theme. Let us know your favorite Start orb!
    Links: Download the Windows 7 Start Button Changer | Find more Start orbs at deviantART

    Read the article

  • Java 6 Certified with Forms and Reports 10g for EBS 12

    - by John Abraham
    Java 6 is now certified with Oracle Application Server 10g Forms and Reports with Oracle E-Business Suite Release 12 (12.0.6, 12.1.1 and higher). What? Wasn't this already certified? No, but a little background might be useful in understanding why this is a new announcement. We previously certified the use of Java 6 with E-Business Suite Release 12 - with the sole exception of the Oracle Application Server 10g components in the E-Business Suite technology stack. Oracle Application Server 10g originally included Java 1.4.2 as part of its distribution. E-Business Suite 12 uses, amongst other things, the Oracle Forms and Reports 10g components running on Java 1.4. Java 1.4 in the Oracle Application Server 10g ORACLE_HOME is used exclusively by AS 10g Forms and Reports for its Java functionality. This version of Java is separate from the Java distribution used by other parts of EBS, such as Oracle Containers for Java (OC4J). What's new about this certification? You can now upgrade the older Java 1.4 libraries used by Oracle Forms & Reports 10g to Java 6. This allows you to upgrade the Java releases within the Oracle Application Server 10g ORACLE_HOME to the same level as the rest of your E-Business Suite technology stack components. Why upgrade? This becomes particularly important as individual vendors' support lifecycles for Java 1.4 reach end of life:
    - Oracle's Sun JDK Release 1.4.2's End of Extended Support: February 2013 (Sustaining Support indefinitely after)
    - IBM SDK and JRE 1.4.2's End of Service: September 2013
    - HP-UX Java 1.4.2's End-of-Life: May 2012
    Along with Oracle Forms, Java lies at the heart of the Oracle E-Business Suite. Small improvements in Java can have significant effects on the performance and stability of the E-Business Suite. As a notable side benefit, later versions of Java have improved built-in and third-party tools for JVM performance monitoring and tuning. Our standing recommendation is that you always stay current with the latest available Java update provided by your operating system vendor. Don't forget to upgrade Forms & Reports to 10.1.2.3: E-Business Suite 12 originally shipped with Oracle Application Server 10g Forms & Reports 10.1.2.0.2. That version is no longer eligible for Error Correction Support. New Forms and Reports 10g patches are now being released with Forms and Reports 10.1.2.3 as the prerequisite. Forms and Reports 10.1.2.3 was certified for EBS 12 environments in November 2008. If you haven't upgraded your EBS 12 environment to Forms & Reports 10.1.2.3, this is a good opportunity to do so.
    References:
    - Using Latest Update of Java 6.0 with Oracle E-Business Suite Release 12 (My Oracle Support Document 455492.1)
    - Overview of Using Java with Oracle E-Business Suite Release 12 (My Oracle Support Document 418664.1)
    - Oracle Lifetime Support Policy (Oracle Fusion Middleware)
    - IBM Developer Kit Lifecycle Dates
    - HP-UX Java - End of Life Policy & Release Naming Terminology
    Related Articles: OracleAS 10g Forms and Reports 10.1.2.3 Certified With EBS R12 | Java 6 Certified with E-Business Suite Release 12

    Read the article

  • AZURE - Stairway To Heaven

    - by Waclaw Chrabaszcz
    Originally posted on: http://geekswithblogs.net/Wchrabaszcz/archive/2014/08/02/azure---stairway-to-heaven.aspx
    Before you start reading, please start playing this song. OK boys and girls, time to get familiar with clouds. Time to become a meteorologist. To be honest, I don't know how to start. Is cloud better or worse than on-campus resources? Hmm ... it is just different. I think that for successful adoption in the cloud world, IT dinosaurs need to forget some "Private Cloud" virtualization bad habits and learn a new way of thinking. Take a look:
    - I don't need any tapes or CDs (the physical kingdom of Windows XP and 2000)
    - I don't need any locally stored MP3s (CD virtualization :-)
    - I can just stream music to my computer, no matter whether my on-site infrastructure is powered on.
    Why not do exactly the same with a web server, SQL, or a Windows server rented for a while? Let's go to the other side of the mirror. First, register yourself for the free one-month trial; as a happy MSDN subscriber you've got a monthly budget to spend. In addition, in the default settings a spending limit protects you against losing real money if your toys consume too much traffic and space. http://azure.microsoft.com/en-us/pricing/free-trial/
    Once your account is ready, forget the web portal - we are PowerShell knights. http://go.microsoft.com/?linkid=9811175&clcid=0x409
        #Authenticate yourself in Azure
        Add-AzureAccount
        #download your settings file once
        Get-AzurePublishSettingsFile
        #Import it to your PowerShell module
        Import-AzurePublishSettingsFile "C:\Azure\[filename].publishsettings"
        #validation
        Get-AzureAccount
        Get-AzureSubscription
        #where are the Azure datacenters
        Get-AzureLocation
        #You will need it
        Update-Help
        #a storage account is tied to a physical location; there are two datacenters on each continent, try the nearest to you
        #all your VMs will store their VHD files on your storage account
        #your storage account name must be globally unique, so I assume that words like "account" or "server" are already used
        New-AzureStorageAccount -StorageAccountName "[YOUR_STORAGE_ACCOUNT]" -Label "AzureTwo" -Location "West Europe"
        Get-AzureStorageAccount
        #it looks like you are ready to deploy your first VM; what templates can we use?
        Get-AzureVMImage | Select ImageName
        #what a mess, let's choose Server 2012
        $ImageName = (Get-AzureVMImage)[74].ImageName
        $cloudSvcName = '[YOUR_STORAGE_ACCOUNT]'
        $AdminUsername = "[YOUR-ADMIN]"
        $adminPassword = '[YOUR_PA$$W0RD]'
        $MediaLocation = "West Europe"
        $vmnameDC = 'DC01'
        #burn baby burn !!!
        $vmDC01 = New-AzureVMConfig -Name $vmnameDC -InstanceSize "Small" -ImageName $ImageName `
            | Add-AzureProvisioningConfig -Windows -Password $adminPassword -AdminUsername $AdminUsername `
            | New-AzureVM -ServiceName $cloudSvcName
        #ice, ice baby …
        Get-AzureVM
        Get-AzureRemoteDesktopFile -ServiceName "[YOUR_STORAGE_ACCOUNT]" -Name "DC01" -LocalPath "c:\AZURE\DC01.rdp"
    As you can see, it is not just a New-AzureVM: you need to associate your VM with an AzureVMConfig (it sets your template), an AzureProvisioningConfig (it sets your customizations), and a storage account. In later steps you'll need to put this machine in a specific subnet, attach an HDD, and more. After a second reading I found that I am using the same name for the STORAGE and SERVICE accounts; please be aware of this if you need to split these values.
    Conclusions:
    - pipe rules!
    - at the beginning it is hard to change your mind and accept the fact that it is easier to remove and recreate a VM than to move it to a different subnet
    - by default everything is firewalled, with limited access to DNS, but NATed outside on custom ports. It is good to check these translations sometimes on the web portal.
    - if you remove your VMs, your hard drives remain in storage and MS will charge you for them; use Remove-AzureVM -DeleteVHD.
    For me AZURE is a lot of fun; once again I can be a newbie and learn every page. For me, Azure offers real freedom in the deployment of VMs without arguing with NetAdmins, WinAdmins, DBAs, PMs and other Change Managers. Unfortunately, sooner or later they will come to my haven and change it into …

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
    Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011 and the first ever Enterprise Manager release available on all platforms simultaneously. This is primarily a stability release which incorporates fixes for many of the issues reported by early adopters, along with their feedback. In addition, this release contains many new features and enhancements in areas across the board.
    New Capabilities and Features:
    - Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback.
    - Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple-domain management, patching, and configuration file synchronization.
    - Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.
    - The latest management capabilities for business-critical applications, including Business Application Management (a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences, and the cloud infrastructure the monitored application is running on) and Enhanced User Experience Reporting (Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context).
    - Several key improvements address ease of administration, reporting, and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.
    New and Revised Plug-Ins: Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality:
    - Enterprise Manager for Oracle Database: 12.1.0.2 (revision)
    - Enterprise Manager for Oracle Fusion Middleware: 12.1.0.3 (new)
    - Enterprise Manager for Chargeback and Capacity Planning: 12.1.0.3 (new)
    - Enterprise Manager for Oracle Fusion Applications: 12.1.0.3 (new)
    - Enterprise Manager for Oracle Virtualization: 12.1.0.3 (new)
    - Enterprise Manager for Oracle Exadata: 12.1.0.3 (new)
    - Enterprise Manager for Oracle Cloud: 12.1.0.4 (new)
    Installation and Upgrade: All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.
    Documentation: The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN.
    Customer Webcast - EM 12c Installation and Upgrade: This webcast is for customers who are interested in learning how to successfully deploy or upgrade to EM 12.1.0.2. Customer Webcast - Installation and Upgrade - September 21 (registration and info on OTN starting September 12).
    Enterprise Manager 12c R2 Resources: OTN Download Page; Upgrade Guide.

    Read the article

  • Reduce ERP Consolidation Risks with Oracle Master Data Management

    - by Dain C. Hansen
    Reducing the risk of ERP consolidation starts first and foremost with your data. This is nothing new; companies with multiple misaligned ERP systems are often putting inordinate risk on their business. It can translate to too much inventory, long lead times, and shipping issues from poorly organized and specified goods. And don't forget the finance side! When goods are shipped and promises are kept or not kept, there's the issue of accounts. No single chart of accounts translates to no accountability. So -- I've decided. I need to consolidate! Well, you can't consolidate ERP applications [for that matter, any of your applications] without first considering your data. This means looking at how your data is being integrated by these ERP systems, how it is being synchronized, and what information is being shared, or not being shared. Most importantly, it means making sure that the data is mastered. What is the best way to do this? In the recent webcast Reduce ERP Consolidation Risks with Oracle Master Data Management we outlined 3 key guidelines:
    #1: Consolidate your Product Data
    #2: Consolidate your Customer and Supplier (Party) Data
    #3: Consolidate your Financial Data
    Together these help customers achieve reduced risk, better customer intimacy, reduced inventory levels, elimination of product variations, and finally a single master chart of accounts. In the case of Oracle's customer Zebra Technologies, they were able to consolidate over 140 applications by mastering their data. Ultimately this gave them 60% cost savings for the year on IT spend.
    Oracle's Solution for ERP Consolidation: Master Data Management. Oracle's enterprise master data management (MDM) can play a big role in ERP consolidation. It includes a set of products that consolidates and maintains complete, accurate, and authoritative master data across the enterprise and distributes this master information to all operational and analytical applications as a shared service. It's optimized to work with any application source (not just Oracle's) and can integrate using technology from Oracle Fusion Middleware (i.e. GoldenGate for data synchronization and real-time replication, or ODI with its E-LT optimized bulk data and transformation capability). In addition, especially for ERP consolidation use cases, it's important to leverage the AIA and SOA capabilities that are part of Fusion Middleware to connect these multiple applications together and relay the data into the correct hub. Oracle's MDM strategy is a unique offering in the industry, one that has common elements across the top and bottom in Middleware, BI/DW, and engineered systems, combined with Enterprise Data Quality to enable comprehensive data governance at all levels. In addition, Oracle MDM provides best-in-class capabilities to master all variations of data, including customer, supplier, product, and financial data.
    But ultimately at the center of Oracle MDM is your data: making it more trusted, making it secure and accessible as part of a role-based approach, and getting it to make sense to you in any situation, whether it's a specific ERP process like we talked about or something that is custom to your organization. To learn more about these techniques in ERP consolidation, watch our webcast or go to our Oracle MDM website at www.oracle.com/goto/mdm

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google search" away. As I will be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group meeting on April 20th 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010 to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 for Visual Studio 2008, then discuss, in more detail, the upgrade process using Visual Studio 2010 while highlighting the advantage of its multi-targeting support.
    Using Visual Studio 2008 SP1: For upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: 1) using a provided automated upgrade tool, or 2) manually upgrading the project. I personally prefer using the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one takes to upgrade.
    Using Visual Studio 2010: Life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in figures 1 through 4. As we proceed with the upgrade process, the wizard requests confirmation on whether we choose to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting enables us to develop against frameworks as early as .Net 2.0 in VS2010).
    [Figure 1: Open solution file using VS2010. Figure 2: VS2010 conversion wizard. Figure 3: Ready-to-convert-to-VS2010 confirmation screen. Figure 4: VS2010 solution conversion progress. Figure 5: Confirm target framework upgrade.]
    In an attempt to make my demonstration realistic, I opted to keep the project targeted at the .Net 3.5 Framework. After the successful completion of the conversion process, a quick sanity check revealed that the NerdDinner project is still targeted at the .Net 3.5 framework (figure 6). Inspecting the Web.config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (figure 7), and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested in modifying existing code.
    [Figure 6: Confirm target framework remained .Net 3.5. Figure 7: Confirm MVC DLL version has been upgraded.]
    In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development against frameworks as early as .Net 2.0.
    References:
    [1] "Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

    Read the article

  • MVC Portable Areas &ndash; Deploying Static Files

    - by Steve Michelotti
    This is the second post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas:
    #1 - Using Web Application Project to build portable areas
    #2 - Conventions for deploying portable area static files
    #3 - Portable area static files as embedded resources
    As I've been digging more into portable areas, one of the things I've liked best is the deployment story, which enables my *.aspx and *.ascx pages to be compiled into the assembly as embedded resources rather than having to maintain all those files separately. In traditional web forms, that was always the thing that prevented developers from utilizing *.ascx user controls across projects (see this post for using portable areas in web forms). However, though the aspx pages are embedded, the supporting static files (e.g., images, css, javascript) are *not*. Most of the demos available online today tend to brush over this issue and focus solely on the aspx side of things. But to create truly robust portable areas, it's important to have a good story for these supporting files as well. I've been working with two different approaches so far (of course I'd really like to hear if other people are using alternatives).
    Scenario: For the approaches below, the scenario really isn't that important. It could be something as trivial as this partial view:

    <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
    <img src="<%: Url.Content("~/images/arrow.gif") %>" /> Hello World!

    The point is that there needs to be careful consideration for *any* scenario that links to an external file such as an image, *.css, *.js, etc. The example shown above uses the Url.Content() method to convert to a relative path. But this method won't necessarily work, depending on how you deploy your portable area. One approach to address this issue is to build your portable area project with MSDeploy/WebDeploy so that it is packaged properly before incorporating it into the host application. All of the *.cs files are removed and the project is ready for xcopy deployment; however, I do *not* need the "Views" folder since all of the markup has been compiled into the assembly as embedded resources. Now in the host application we create a folder called "Modules" and deploy any portable areas as sub-folders under that. At this point we can add a simple assembly reference to the Widget1.dll sitting in the Modules\Widget1\bin folder, and I can now render the portable area in my view like any other area. However, the problem is that the view results in a broken image: it couldn't find arrow.gif because it looked for /images/arrow.gif when the file was *actually* located at /Modules/Widget1/images/arrow.gif. One solution is to make the physical location of the portable area configurable from the perspective of the host like this:

    <appSettings>
        <add key="Widget1" value="Modules\Widget1"/>
    </appSettings>

    Using the <appSettings> section is a little cheesy, but it could be better formalized into its own config section. In fact, if you were willing to rely on conventions (e.g., "Modules\{areaName}") then the config could be eliminated completely.
    With this config in place, we can create our own HTML helper method called Url.AreaContent() that "wraps" the OOTB Url.Content() method while simply pre-pending the area location path:

    public static string AreaContent(this UrlHelper urlHelper, string contentPath)
    {
        var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
        var areaPath = (string)ConfigurationManager.AppSettings[areaName];

        return urlHelper.Content("~/" + areaPath + "/" + contentPath);
    }

    With these two items in place, we just change our Url.Content() call to Url.AreaContent() like this:

    <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

    and the arrow.gif now renders correctly. Since we're just using our own Url.AreaContent() rather than the built-in Url.Content(), this solution works for images, *.css, *.js, or any externally referenced files. Additionally, any images referenced inside a css file will work provided it's a relative reference and not an absolute reference. An alternative to this approach is to build the static files into the assembly as embedded resources themselves. I'll explore this in another post (linked at the top).
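    And if you do drop the config in favor of the convention, the helper shrinks accordingly. Here is a minimal sketch (my own hypothetical UrlHelperExtensions class, assuming every portable area is always deployed under ~/Modules/{areaName}):

    using System.Web.Mvc;

    public static class UrlHelperExtensions
    {
        public static string AreaContent(this UrlHelper urlHelper, string contentPath)
        {
            // Convention over configuration: no <appSettings> lookup, just the
            // assumed ~/Modules/{areaName} deployment location.
            var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];

            return urlHelper.Content("~/Modules/" + areaName + "/" + contentPath);
        }
    }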

    Read the article

  • Acr.ExtDirect &ndash; Part 1 &ndash; Method Resolvers

    - by Allan Ritchie
    One of the most important things about any open source library, in my opinion, is to be as open as possible while avoiding having the library become invasive to your code/business-model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn't really a fav with too many people these days since it comes with a startup performance hit and requires runtime compiling. I find that there is a whole ton of communication libraries out there currently requiring this (e.g. WCF, RIA, etc.). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are two other "out-of-the-box" implementations:
    - Convention: I don't actually recommend using this one since it opens up all of your public instance methods to remote execution calls.
    - XML Configuration: This isn't so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration, and as I said earlier, XML isn't the fav these days.
    So what are your options if you don't like attributes, convention, or XML configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution:

    public interface IDirectMethodResolver {

        bool IsServiceType(ComponentModel model, Type type);
        string GetNamespace(ComponentModel model);
        string[] GetDirectMethodNames(ComponentModel model);
        DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
    }

    Now to implement our own method resolver:

    public class TestResolver : IDirectMethodResolver {

        #region IDirectMethodResolver Members

        /// <summary>
        /// Determine if you are calling a service
        /// </summary>
        public bool IsServiceType(ComponentModel model, Type type) {
            return (type.Namespace == "MyBLL.Data");
        }

        /// <summary>
        /// Return the calling name for the client side
        /// </summary>
        public string GetNamespace(ComponentModel model) {
            return model.Name;
        }

        public string[] GetDirectMethodNames(ComponentModel model) {
            switch (model.Name) {
                case "Products":
                    return new[] {
                        "GetProducts",
                        "LoadProduct",
                        "Save",
                        "Update"
                    };

                case "Categories":
                    return new[] {
                        "GetProducts"
                    };

                default:
                    throw new ArgumentException("Invalid type");
            }
        }

        public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
            if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                return DirectMethodType.FormSubmit;

            else if (method.Name.StartsWith("Load"))
                return DirectMethodType.FormLoad;

            else
                return DirectMethodType.Direct;
        }

        #endregion
    }

    And there you have it, your own custom method resolver. Pretty easy and pretty open ended!
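    For comparison, here is a minimal sketch of the WCF piggy-backing option mentioned at the top, using the standard System.ServiceModel attributes (the Products class and its method bodies are my own hypothetical illustration, not part of the library):

    using System.ServiceModel;

    namespace MyBLL.Data {

        [ServiceContract]
        public class Products {

            // Methods carrying [OperationContract] are the candidates for remote execution.
            [OperationContract]
            public string[] GetProducts() {
                return new[] { "Widget", "Gadget" };
            }

            [OperationContract]
            public void Save(string product) {
                // hypothetical persistence logic would go here
            }

            // No attribute: stays internal to the server side.
            public void RebuildCache() { }
        }
    }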

    Read the article

  • Flash Actionscript 3.0 Game Projectile Creation

    - by Christian Basar
    I have been creating a side-scrolling ActionScript 3.0 game. In this game I want the Player to be able to shoot blow darts as weapons. I had some trouble getting the darts to be created in the right place (in front of the player), but eventually got it working with some help from this page (please look at it for background information on this problem): http://stackoverflow.com/questions/8031553/flash-actionscript-3-0-game-projectile-creation
    I got the darts to be created in the right place (near the player) and a 'movePlayerDarts()' function moves them. But I actually have a new problem. When the player moves after firing a dart, the dart tries to follow him! If the player jumps, the dart rises up. If the player moves to the left, the dart moves slightly to the left. Obviously, there is some code somewhere which is telling the darts to follow the player. I do not see how, unless the 'playerDartContainer' has something to do with that. But the container is always at position (0,0) and it does not move. Also, as a test, I traced a dart's 'y' coordinate within the constantly-running 'movePlayerDarts()' function. As you can see below, that function constantly moves the dart down the y axis by increasing its y coordinate. But when I jump, the 'y' coordinate being traced is never reduced, even though the dart clearly looks like it's rising! If anybody has any suggestions, I'd appreciate them! Here is the code I use to create the darts:

    // This function creates a dart
    public function createDart():void {
        if (playerDartContainer.numChildren <= 4) {
            // Play dart shooting sound
            sndDartShootIns.play();
            // Create a new 'PlayerDart' object
            playerDart = new PlayerDart();

            // Set the new dart's initial position and direction depending on the player's direction
            // Player's facing right
            if (player.scaleX == 1) {
                // Create dart in front of player's dart gun
                playerDart.x = player.x + 12;
                playerDart.y = player.y - 85;
                // Dart faces right, too
                playerDart.scaleX = 1;
            }
            // Player's facing left
            else if (player.scaleX == -1) {
                // Create dart in front of player's dart gun
                playerDart.x = player.x - 12;
                playerDart.y = player.y - 85;
                // Dart faces left, too
                playerDart.scaleX = -1;
            }

            playerDartContainer.addChild(playerDart);
        }
    } // End of 'createDart()' function

    This code is the EnterFrameHandler for the player darts:

    // In every frame, call 'movePlayerDarts()' to move the darts within the 'playerDartContainer'
    public function playerDartEnterFrameHandler(event:Event):void {
        // Only move the Player's darts if their container has at least one dart within
        if (playerDartContainer.numChildren > 0) {
            movePlayerDarts();
        }
    }

    And finally, this is the code that actually moves all of the player's darts:

    // Move all of the Player's darts
    public function movePlayerDarts():void {
        for (var pdIns:int = 0; pdIns < playerDartContainer.numChildren; pdIns++) {
            // Set the Player Dart 'instance' variable to equal the current PlayerDart
            playerDartIns = PlayerDart(playerDartContainer.getChildAt(pdIns));

            // Move the current dart in the direction it was shot. The dart's 'x-scale'
            // factor is multiplied by its speed (5) to move the dart in its correct
            // direction. If the 'x-scale' factor is -1, the dart is pointing left (as
            // seen in the 'createDart()' function). (-1 * 5 = -5), so the dart will go
            // left at a rate of 5. The opposite is true for the right-facing darts
            playerDartIns.x += playerDartIns.scaleX * 1;

            // Make gravity work on the dart
            playerDartIns.y += 0.7;
            //playerDartIns.y += 1;

            // What if the dart hits the ground?
            if (HitTest.intersects(playerDartIns, floor, this)) {
                playerDartContainer.removeChild(playerDartIns);
            }

            //trace("Dart x: " + playerDartIns.x);
            trace("Dart y: " + playerDartIns.y);
        }
    }

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 4-10, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook Page for the week of November 4-10, 2012.
    - OAM/OVD JVM Tuning | @FusionSecExpert -- Vinay from the Oracle Fusion Middleware Architecture Group (the very prolific A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.
    - Exploring Lambda Expressions for the Java Language and the JVM | Java Magazine -- In the latest //Java/Architect column in Java Magazine, Ben Evans, Martijn Verburg, and Trisha Gee explain how, "although Lambda expressions might seem unfamiliar to begin with, they're quite easy to pick up, and mastering them will be vital for writing applications that can take full advantage of modern multicore CPUs."
    - SOA Galore: New Books for Technical Eyes Only -- Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
    - Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio -- TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen.
    - Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger -- "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." -- Irving Wladawsky-Berger
    - ADF Mobile Custom JavaScript - iFrame Injection | John Brunswick -- The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.
    - Architects Matter: Making sense of the people who make sense of enterprise IT -- Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise." (Read the article...)
    - Converting SSL certificate generated by a 3rd party to an Oracle Wallet | Paulo Albuquerque -- Oracle Fusion Middleware A-Team member Paulo Albuquerque shares "a workaround to get your private key, certificate and CA trusted certificates chain into Oracle Wallet."
    - How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers -- "Business Process Management...supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.
    - Updated Business Activity Monitoring (BAM) Class | Gary Barg -- Oracle SOA Team blogger Gary Barg has news for those interested in a skills upgrade. This updated Oracle University course "explains how to use Oracle BAM to monitor enterprise business activities across an enterprise in real time. You can measure your key performance indicators (KPIs), determine whether you are meeting service-level agreements (SLAs), and take corrective action in real time."
    Thought for the Day: "For every complex problem, there is a solution that is simple, neat, and wrong." -- H. L. Mencken (September 12, 1880 - January 29, 1956) Source: SoftwareQuotes.com

    Read the article

  • Silverlight Cream for February 06, 2011 -- #1042

    - by Dave Campbell
    In this Issue: Mike Taulty, Timmy Kokke, Laurent Bugnion, Arik Poznanski, Deyan Ginev, Deborah Kurata(-2-), Johnny Tordgeman, Roy Dallal, Jaime Rodriguez, Samuel Jack(-2-), James Ashley.
    Above the Fold:
    - Silverlight: "Customizing Silverlight properties for Visual Designers" Timmy Kokke
    - WP7: "Back button press when using webbrowser control in WP7" Jaime Rodriguez
    - Expression Blend: "Blend Bits 21 - Importing from Photoshop & Illustrator..." Mike Taulty
    From SilverlightCream.com:
    - Blend Bits 21 - Importing from Photoshop & Illustrator... -- Mike Taulty is up to 21 episodes in his Blend Bits series now, and this one is about using Blend's import capability, such as importing a .psd file with all the layers intact.
    - Customizing Silverlight properties for Visual Designers -- Timmy Kokke has part 1 of 2 on exposing your Silverlight control properties in design surfaces such as the Visual Studio designer or Expression Blend.
    - An error when installing MVVM Light templates for VS10 Express -- Laurent Bugnion has released a new version of MVVM Light that resolves a problem with the VS2010 Express version of the templates... no problem with anything else.
    - Reading RSS items on Windows Phone 7 -- Arik Poznanski has a post up about reading RSS on a WP7, but better yet, he also has code for a helper class that you can grab, plus an explanation of wiring it up.
    - Integrating your Windows Phone unit tests with MSBuild #4: The WP7 Unit Test Application -- Deyan Ginev has a post up about Telerik's WP7 test app that outputs test results in XML from the emulator so they can be integrated with the MSBuild log.
    - Accessing Data in a Silverlight Application: EF -- I apparently missed this post by Deborah Kurata last week on bringing data into your Silverlight app via Entity Framework... a good detailed tutorial in VB and C#.
    - Updating Data in a Silverlight Application: EF -- In Deborah Kurata's latest post, she continues with Entity Framework by demonstrating updates to the database... full source code will be provided in a later post.
    - Fun with Silverlight and SharePoint 2010 Ribbon Control - Part 2 - An In Depth Look At The Ribbon Control -- Johnny Tordgeman has part 2 of his Silverlight and SharePoint 2010 ribbon series up... taking a deep dive into the ribbon... great explanation of the attributes, code included.
    - Geographic Coordinates Systems -- Roy Dallal has some geo code up that's not necessarily Silverlight, but very cool if you're doing any GIS programming... ya gotta know the coordinate systems!
    - Back button press when using webbrowser control in WP7 -- Jaime Rodriguez has a post up discussing the much-lamented back-button behavior in the certification requirements and how to deal with it in a web browser app.
    - Multiplayer-enabling my Windows Phone 7 game: Day 1 -- Samuel Jack challenged himself to build a WP7 game in 3 days... now he's challenging himself to make it multiplayer in 3 days... this first hour-by-hour post covers research on networking and an Azure server-side solution.
    - Multiplayer-enabling my Windows Phone 7 game: Day 2 - Building a UI with XPF -- Day 2 for Samuel Jack getting the multiplayer portion of his game working in 3 days... this day involves getting up to speed with XPF.
    - How to Hotwire your WP7 Phone Battery -- Did you realize that if you run your WP7 battery completely down you can't charge it? James Ashley reports that circumstance, and how he resolved it.
    Stay in the 'Light!
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Kill all the project files!

    - by jamiet
    Like many folks I'm a keen podcast listener, and yesterday my commute was filled by listening to Scott Hunter being interviewed on .Net Rocks about the next version of ASP.Net. One thing Scott said really struck a chord with me. I don't remember the full quote, but he was talking about how the ASP.Net project file (i.e. the .csproj file) is going away. The rationale is that the main purpose of that file is to list all the other files in the project, and that's something the file system is pretty good at. In Scott's own words (that someone helpfully put in the comments): "A file that lists files is really redundant when the OS already does this." Romeliz Valenciano correctly pointed out on Twitter that there will still be a project.json file; however, there will no longer be a need to keep a list of files in a project file. I suspect project.json will simply contain a list of exclusions where necessary, rather than the current approach where the project file is a list of inclusions. On the face of it this seems like a pretty good idea. I've long been a fan of convention over configuration, and this is a great example of that. Instead of listing all the files in a separate file, just treat all the files in the directory as being part of the project. Ostensibly the approach is: if it's in the directory, it's part of the project. Simple. Now I'm not an ASP.net developer, far from it, but it did occur to me that the same approach could be applied to the two Visual Studio project types that I am most familiar with, SSIS & SSDT. Like many people, I've long been irritated by SSIS projects that display a faux file system inside Solution Explorer. As you can see in the screenshot below, the project has Miscellaneous and Connection Managers folders, but no such folders exist on the file system. This may seem like a minor thing, but it means useful Solution Explorer features like Show All Files and Open Folder in Windows Explorer don't work, and quite frankly it makes me feel like a second-class citizen in the Microsoft ecosystem. I'm a developer, treat me like one. Don't try and hide the detail of how a project works under the covers, show it to me. I'm a big boy, I can handle it! Would it not be preferable to simply treat all the .dtsx files in a directory as being part of a project? I think it would; that's pretty much all the .dtproj file does anyway (that, and present things in a non-alphabetic order, something else that wildly irritates me), so why not just get rid of the .dtproj file? In the case of SSDT, the .sqlproj actually does a whole lot more than simply list files, because it also states the BuildAction of each file (Build, NotInBuild, Post-Deployment, etc...), but I see no reason why the convention-over-configuration approach can't help us there either. Want to know which is the post-deployment script? Well, it's the one called Post-DeploymentScript.sql! Simple! So that's my new crusade. Let's kill all the project files (well, the .dtproj & .sqlproj ones anyway). Are you with me? @Jamiet
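    To make the convention concrete, here is a minimal sketch of how a convention-based loader might decide what belongs to a project. It is entirely hypothetical: the exclusions parameter stands in for whatever a project.json "exclude" list might carry, and the .dtsx filter is just the SSIS case discussed above:

    using System;
    using System.IO;
    using System.Linq;

    static class ConventionalProject
    {
        // If it's in the directory, it's part of the project -- minus any
        // explicitly excluded files (the hypothetical project.json "exclude" list).
        public static string[] GetProjectFiles(string projectDir, string[] exclusions)
        {
            return Directory.EnumerateFiles(projectDir, "*.dtsx", SearchOption.AllDirectories)
                            .Where(f => !exclusions.Contains(Path.GetFileName(f),
                                                             StringComparer.OrdinalIgnoreCase))
                            .OrderBy(f => f) // alphabetical, unlike the .dtproj ordering
                            .ToArray();
        }
    }

    A Post-DeploymentScript.sql convention would work the same way: the loader finds the file by its well-known name rather than by a BuildAction flag in a project file.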

    Read the article

< Previous Page | 682 683 684 685 686 687 688 689 690 691 692 693  | Next Page >