Search Results

Search found 5545 results on 222 pages for 'future'.


  • What deployment framework to use?

    - by jeruki
    We are trying to figure out what deployment method/framework to use with a Python application. It has a basic WSGI server that exposes some REST resources, and a set of static web pages with the interface that are served through Apache. The situation is as follows: my team works on isolated parts of the program, and sometimes together on specific modules. We have different testing servers and one master server. We all work locally, sync the code using git, and then run a bash script that copies the files from the Windows machines to the indicated Linux server (using SSH) and restarts the app. After thinking about it, this doesn't seem to be the right way to do it: the script overwrites all the files on the server with the local files every time. We want to be able to work on the same server without the worry of overwriting other people's code, and we need to deploy to different servers to avoid restarting the service while others are working with it. In the near future we will need to deploy to the master server, or to several clones of it, once the application reaches a more mature state. We found several options: Capistrano, Kwate, Chef or Fortress, even Fleet, but we wanted opinions from people who have used them, to be sure it is what we need. So these are the main questions: Are these the kind of programs we should be looking at to achieve a safe concurrent deployment process? Which ones have you used or would you recommend, and why? Do you think they would help in our actual situation? Thank you so much for your feedback and advice on this.
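
    For a sense of what these tools automate, here is a minimal sketch of a release-per-directory deploy task using Fabric, a Python deployment library. The host, repository URL, paths and service name are hypothetical placeholders, not the setup described above:

    ```python
    # A minimal sketch of a release-per-directory deploy using Fabric
    # (pip install fabric). Host, repo URL, paths and service name are
    # hypothetical placeholders.
    from datetime import datetime, timezone
    from fabric import Connection

    APP_GIT = "https://example.com/team/app.git"
    RELEASES = "/srv/app/releases"
    CURRENT = "/srv/app/current"

    def deploy(host: str) -> None:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
        target = f"{RELEASES}/{stamp}"
        with Connection(host) as c:
            # Every deploy lands in its own directory, so nobody's files
            # get overwritten on a shared test server.
            c.run(f"mkdir -p {RELEASES}")
            c.run(f"git clone --depth 1 {APP_GIT} {target}")
            # Flip the 'current' symlink atomically, then restart the app.
            c.run(f"ln -sfn {target} {CURRENT}")
            c.run("sudo systemctl restart app.service")

    # deploy("deploy@test-server-1")   # choose the target server per run
    ```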


  • links for 2011-02-17

    - by Bob Rhubart
    - ArchitectACEs - Oracle Wiki: Putting a Face on the Architect ACE. The Oracle ACEs listed here have identified themselves, or have been identified by fellow ACEs, as software architects. As... (tags: ping.fm)
    - Debra's thoughts on Oracle and User Groups: I did it - I did the Fusion UX Demo. Oracle ACE Director Debra Lilley shares her experience in presenting a Fusion Applications demo at RMOUG. (tags: oracle otn oracleace)
    - The Blas from Pas: JRuby Script to Monitor a Oracle WebLogic GridLink Data Source Remotely. "In WebLogic 10.3.4 release, a single data source implementation has been introduced to support Oracle RAC cluster. To simplify and consolidate its support for Oracle RAC, WebLogic Server has provided a single data source that is enhanced to support the capabilities of Oracle RAC." (tags: oracle otn weblogic)
    - Show Notes: Bob Hensle on IT Strategies from Oracle (ArchBeat). In Part 1 Bob Hensle talked about the various documents in the IT Strategies from Oracle library. In Part 2 (now available) Bob talks about how SOA and other factors are reflected in those documents. (tags: oracle otn entarch podcast)
    - PODCAST: Examining the state of EA and findings of recent survey (Open Group Blog). A transcript of a podcast panel discussion on the findings from a study on the current state and future direction of enterprise architecture, from The Open Group Conference, San Diego 2011. (tags: entarch opengroup)
    - A Virtual Dilemma (Antony Reynolds' Blog). SOA author Antony Reynolds shares a solution. (tags: oracle otn soa)
    - Webcast: Live Online Forum: Oracle Security - February 24, 9:00am PT. Speakers: Mary Ann Davidson, Chief Security Officer, Oracle; Tom Kyte, Senior Technical Architect, Oracle; Jeff Margolies, Partner, Security Practice, Accenture; Vipin Samar, VP, Database Security Product Development, Oracle; and Nishant Kaushik, Chief Strategist, Identity and Access Management. (tags: oracle security)
    - Obama banks on cloud, consolidation, to hold down IT costs (Computerworld NZ). President Obama's fiscal 2012 budget proposal keeps IT spending almost flat compared to fiscal 2010, mostly due to the consolidation of data centers and a shift to cloud computing systems. (tags: ping.fm)


  • Questions for Oracle GlassFish and Middleware Executives

    - by arungupta
    A GlassFish Community Event is planned, as part of JavaOne, on Sep 30, 2012. If you are involved in the GlassFish community, this is a perfect opportunity to engage with the Oracle GlassFish team.

    Agenda:
    - 11:00 - 11:05: Introduction
    - 11:05 - 11:30: Roadmap and Community Updates
    - 11:30 - 12:15: Q&A with Executive Speaker Panel from Oracle and the GlassFish Team
    - 12:15 - 01:00: Customer Testimonials

    Location: Moscone West, Room 2005

    One of the highlights of the event is a speaker panel with executives from Oracle GlassFish and Middleware. This will be your chance to ask tough questions and expect an honest and frank answer from them. If you are attending JavaOne, you can register for the Community Event and ask your questions in person. However, if you are not attending the conference, we would still like to give you an option to ask your questions. Are you excited, nervous, curious, confused, or thrilled about the future of Java EE, GlassFish, and middleware at Oracle in general? This is your chance to leave a comment on this blog with your question. We'll pick some of the questions, ask them for you, and then post the responses after the conference. Have you registered for JavaOne?


  • Is knowledge of hacking mechanisms required for an MMO?

    - by Gabe
    Say I was planning, in the future (not now! There is a lot I need to learn first), to participate in a group project making a massively multiplayer online game (MMO), and my job would be the networking portion. I'm not that familiar with network programming (I've read a very basic book on PHP and MySQL, and I messed around a bit with WAMP). In the course of my studying of PHP and MySQL, should I look into hacking? Hacking as in port scanning, router hacking, etc. In MMOs people are always trying to cheat (bots and such), but the worst scenario would be having someone hack the databases. This is just my conception of it; I really don't know. I do, however, understand networking fairly well: subnetting, ports, IPs (local/global), etc. In your professional opinion (if you understand the topic, enlighten me), should I learn about these things in order to counter the possibility of this happening? Also, besides the things I mentioned (port scanning, router hacking), is there anything else that pertains to hacking that I should look into? I'm not too familiar with the malicious/security aspects of networking. And a note: I'm not some kid trying to learn how to hack. I just want to learn as much as possible before I go to college, and I really need to know if I need to study this or not.
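
    One concrete thing worth knowing early: database "hacks" against web-backed games are most often SQL injection, which is defended in application code rather than at the router. A toy illustration in Python (table, data and input are invented; the same parameterized-query idea applies to PHP/MySQL):

    ```python
    # Toy illustration of SQL injection, the most common way web-backed game
    # databases actually get "hacked". Table, data and input are made up.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE players (name TEXT, password TEXT)")
    conn.execute("INSERT INTO players VALUES ('alice', 'secret')")

    name = "' OR '1'='1"  # hostile input a player might type into a login form

    # Vulnerable: pasting input into SQL lets the attacker rewrite the query.
    rows = conn.execute(
        f"SELECT * FROM players WHERE name = '{name}'").fetchall()
    print(len(rows))  # 1 -- the injected OR clause matched every row

    # Safe: a parameterized query treats the input as data, never as SQL.
    rows = conn.execute(
        "SELECT * FROM players WHERE name = ?", (name,)).fetchall()
    print(len(rows))  # 0 -- no player is literally named "' OR '1'='1"
    ```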


  • Mobility Card in Bangalore for Transportation

    - by Rekha
    Transport Minister R Ashoka announced that Bangalore Metropolitan Transport Corporation (BMTC) services are going to be among the best in the world soon. BMTC has planned to launch a Mobility Card with which commuters can ride BMTC, KSRTC and future Metro train services without buying tickets for each ride. The conductor will have a simple device in which commuters can swipe their cards to automatically deduct the ticket tariff for bus or metro rides. The Mobility Card can be obtained by paying a fixed amount. This method is time-saving, and it spares commuters from having to pay exact change for tickets. Ashoka says the Volvo Vayu Vaira services have internet connectivity and voice announcements of every bus stop name, and this has been appreciated by commuters. With WiFi connections coming soon in Shatabdi trains, and with Mobility Cards, India will soon match US standards of service. Government officials are keen on implementing these services before the end of this year. Hope all these services are well used and maintained.

    This article, titled Mobility Card in Bangalore for Transportation, was originally published at Tech Dreams.


  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I may or may not ask this question here, but I thought to give it a try. I've been in the software industry since 2002, and I'm now at a senior level where I normally code, lead and define the architecture; giving technical solutions to management is one of the assets I've earned during my service. Now it is time to define the road map for the future, $$$. I am not in favor of project management roles. I've been thinking of going into ERP, and my current company does give me the option to go for Navision / Microsoft Dynamics. They are currently on 4.0, but they are planning to move to 2009 and also to build a plug-in of their own. Indeed the option is good, because Microsoft is trying to capture the market for Dynamics products; however, they have had less success in Australia. Another option is SAP, where a person can earn 200K $ a year, whereas I doubt the same kind of financial growth is available for a Microsoft geek. What is your opinion on Navision versus SAP? If I try to move completely to SAP it could be a bit challenging, as the market will consider me a fresher; however, the return is quite good. In the case of Microsoft, I think technology changes so fast that there is less chance to grow within the same experience; in other words, if any new framework comes out in .NET, the market looks for the person who knows that new framework, not .NET itself. With SAP, the base remains the same, and the chances of earning more money from the market are better. What would you do if you were me? On Stack Overflow, Navision questions number 20+, whereas SAP has 200+. :-)


  • Implementing Light Volume Front Faces

    - by cubrman
    I recently read an article about light indexed deferred rendering here: http://code.google.com/p/lightindexed-deferredrender/ It explains its ideas in a clear way, but there is one point that I fail to understand. It is in fact one of the most interesting ones, as it explains how to implement transparency with this approach:

    "Typically when rendering light volumes in deferred rendering, only surfaces that intersect the light volume are marked and lit. This is generally accomplished by a 'shadow volume like' technique of rendering back faces - incrementing stencil where depth is greater than - then rendering front faces and only accepting when depth is less than and stencil is not zero. By only rendering front faces where depth is less than, all future lookups by fragments in the forward rendering pass will get all possible lights that could hit the fragment."

    Can anyone explain how exactly you need to render only the front faces? Another question: why do you need the front faces at all? Why can't we simply render all the lights and store the ones that overlap at a given pixel in a texture? Does this approach serve as a cut-off plane to discard lights blocked by opaque geometry?
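
    Translated into API state, the quoted two-pass marking could look roughly like the PyOpenGL sketch below. It follows the quoted text literally (depth test GREATER on back faces to increment stencil, then LESS plus a non-zero stencil test on front faces); real implementations vary the exact depth/stencil combination, so treat this as one plausible reading rather than the article's reference code. draw_volume() is assumed to issue the light volume's draw call inside a live GL context:

    ```python
    # Sketch of the quoted two-pass stencil marking, using PyOpenGL.
    from OpenGL.GL import (
        GL_ALWAYS, GL_BACK, GL_CULL_FACE, GL_DEPTH_TEST, GL_FALSE, GL_FRONT,
        GL_GREATER, GL_INCR, GL_KEEP, GL_LESS, GL_NOTEQUAL, GL_STENCIL_TEST,
        GL_TRUE, glColorMask, glCullFace, glDepthFunc, glDepthMask, glEnable,
        glStencilFunc, glStencilOp)

    def mark_light_volume(draw_volume):
        glEnable(GL_DEPTH_TEST)
        glEnable(GL_STENCIL_TEST)
        glEnable(GL_CULL_FACE)
        glDepthMask(GL_FALSE)                   # read depth, never write it

        # Pass 1: back faces only; increment stencil where depth is greater.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
        glCullFace(GL_FRONT)                    # culling front keeps back faces
        glDepthFunc(GL_GREATER)
        glStencilFunc(GL_ALWAYS, 0, 0xFF)
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR)  # increment on depth pass
        draw_volume()

        # Pass 2: front faces only; accept where depth is less than and
        # stencil is not zero, writing the light index for the forward pass.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
        glCullFace(GL_BACK)
        glDepthFunc(GL_LESS)
        glStencilFunc(GL_NOTEQUAL, 0, 0xFF)
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP)
        draw_volume()
    ```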


  • Partial upgrade on 12.04, how to stop nagging after locking to a working NVIDIA & xorg

    - by alsk
    How do I stop the update manager from offering updates and upgrades that could potentially break my working 2D and 3D graphics? I finally got 12.04 working as it should, with the nvidia-173 driver, by downgrading xorg and locking the versions: on my 32-bit Athlon64 system with an (Albatron) NVIDIA GeForce FX5700XT, I locked (pinned) xorg at 1:7.6-7ubuntu7, xserver-xorg-core at 2:11.1-0ubuntu10.07, and nvidia-173 at 173.14.35-0ubuntu0.2. An annoying thing left is that every time updates are checked, I get a warning about a partial upgrade, with the ambiguous options "partial upgrade" and "close". Ambiguous in the sense that if I click close, I get the option to update a few packages, which has been fine, while "partial upgrade" would update my kernel to 3.2, alter xorg, remove nvidia-173, update mesa, and so on. This is not what I call appropriate after locking the xorg and NVIDIA driver versions. One may say it is correct according to package-management logic, but to me as a user it makes little sense. The last Ubuntu that worked without a big mess for me was 10.10, so I will not put 12.10 on my "production" system until I can be sure it will not trash the system again. P.S. Is there a recommended way to keep NVIDIA GeForce FX cards working with 3D on Ubuntu... in future?
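
    For reference, version locks of this kind are usually expressed as a pin in /etc/apt/preferences.d, along the lines of the sketch below. The file name is arbitrary and the version strings are simply the ones quoted above, so verify them against dpkg -l on your own system before relying on them:

    ```
    Explanation: /etc/apt/preferences.d/lock-xorg-nvidia (sketch; verify the
    Explanation: version strings against dpkg -l before relying on them)
    Package: xserver-xorg-core
    Pin: version 2:11.1-0ubuntu10.07
    Pin-Priority: 1001

    Package: nvidia-173
    Pin: version 173.14.35-0ubuntu0.2
    Pin-Priority: 1001
    ```

    A priority above 1000 tells apt to keep that exact version even when a newer one is available.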


  • Recommended Method to Watch Amazon Prime using Ubuntu 14.04 LTS

    - by Kurt Sanger
    I realize that HAL is no longer in the Ubuntu Software Center for Ubuntu 14.04 and is only available from a third party at this time. But I would like to know what Ubuntu's plans are for integrating DRM into Linux. Especially with Amazon's integration into the search tool, one would hope they would make it easier for Amazon Prime customers to watch Instant Videos. Is the repository for getting HAL for 13.10 safe to use? What will it break if I install it onto 14.04? Or do we need to find another OS that has DRM built into it? If HAL is okay to add to the OS using a third-party repo, then why doesn't the Ubuntu Software Center support it too? I imagine that Amazon's contract with the video copyright holders requires some protection on electronically distributed media. I also imagine that getting Amazon to change is much harder than getting a bunch of software engineers to fix Ubuntu. Unless they don't want to. At which point Ubuntu isn't really a complete OS. Very disappointing. In general, the ease of use of Ubuntu, the Software Center, and the large variety of applications were alluring, but breaking DRM playback wasn't a great idea. Can't wait to see what fails in our next update. Please tell us that there is a plan that is going to work in our future.


  • Is DQS-in-the-cloud on its way?

    - by jamiet
    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll:

    "March 2012 - December 2012 (10 months), Redmond, WA. Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization."

    http://www.linkedin.com/profile/view?id=9657184

    It sounds as though Data-Quality-Services-in-the-cloud (which I spoke of as a useful addition to Microsoft's BI portfolio in my previous blog post, Thoughts on Power BI for Office 365) might be on its way some time in the future. And what's this SQL-wide reorganization? Interesting stuff.

    @Jamiet
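
    As an aside, the trigram matching mentioned in that profile is a classic fuzzy-matching building block and is easy to sketch. The toy below only illustrates the general idea in Python; it is not the algorithm Matt built:

    ```python
    # Toy trigram similarity of the kind used for fuzzy matching in data
    # cleansing; an illustration of the idea only, not the DQS algorithm.
    def trigrams(s: str) -> set[str]:
        s = f"  {s.lower()} "            # pad so short strings still match
        return {s[i:i + 3] for i in range(len(s) - 2)}

    def similarity(a: str, b: str) -> float:
        ta, tb = trigrams(a), trigrams(b)
        return len(ta & tb) / len(ta | tb)   # Jaccard overlap of trigram sets

    print(round(similarity("Microsoft Corp", "Microsoft Corporation"), 2))
    print(round(similarity("Microsoft", "Oracle"), 2))
    ```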


  • Welcome to the FMW Install and Admin Proactive Team Blog

    - by Daniel Mortimer
    Introduction

    Welcome to the Fusion Middleware Install and Administration Proactive Support blog. This is our first post, so let's begin by introducing ourselves and our mission.

    Who We Are

    We are a small team of support engineers based in Europe. Our expertise covers all matters related to the installation and administration of Oracle Application Server 10g, Oracle Fusion Middleware 11g and future versions to come. We particularly focus on core components such as:
    - the Installers and Configuration Wizards
    - Web Tier (Oracle HTTP Server)
    - OPMN
    - Enterprise Manager Console for Application Server
    as well as general questions and problems relating to patching, maintenance and architecture.

    Our Mission
    - Improve the customer experience
    - Enable customers to avoid / prevent issues when working with our products
    - Enable faster resolution of problems when they occur

    Our Activities
    - Enhancement and maintenance of our knowledge base; in particular, developing and maintaining special content such as the Fusion Middleware Information Centers and Lifecycle Support Advisors
    - Seeking continuous improvement of the product documentation
    - Contributing to the Fusion Middleware Support News
    - Moderation of the "Oracle Application Server" support community
    - Participating in the Support Advisor Webcast program
    - Involvement in the lifecycle of diagnostic tools such as RDA and OCM: user acceptance testing, and logging of enhancements and health check ideas
    - Providing feedback to product management / development: logging of product bugs and enhancements, and suggesting improvements that could be made to web sites like OTN
    - Promoting new support documents and tools via channels such as the Newsletter and social media

    We hope that this blog will be a two-way communication, as we are interested in feedback on what we can improve. Many suggestions we can act on immediately, while others may take more time, but all of them will be acknowledged and followed up.

    Thank you for your time, and we look forward to both informing and working with you.

    Postscript: Many links you will find in our blog entries will require a login to My Oracle Support. For readers who do not have a login, please accept our apologies; when and where possible we will endeavour to ensure the links supplement rather than replace the wording in the blog entries.


  • Observable Adapter

    - by Roman Schindlauer
    .NET 4.0 introduced a pair of interfaces, IObservable&lt;T&gt; and IObserver&lt;T&gt;, supporting subscriptions to and notifications for push-based sequences. In combination with Reactive Extensions (Rx), these interfaces provide a convenient and uniform way of describing event sources and sinks in .NET. The StreamInsight CTP refresh in November 2009 included an Observable adapter supporting "reactive" event inputs and outputs. While we continue to believe it enables an important programming model, the Observable adapter was not included in the final (RTM) release of Microsoft StreamInsight 1.0. The release takes a dependency on .NET 3.5 but, for timing reasons, could not take a dependency on .NET 4.0, and shipping a separate copy of the observable interfaces in StreamInsight (as we did in the CTP refresh) was not a viable option in the RTM release. Within the next months, we will be shipping another preview of the Observable adapter that targets .NET 4.0. We look forward to gathering your feedback on the new adapter design! We plan to include the Observable adapter implementation in the product in a future release of Microsoft StreamInsight.
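
    For readers who have not met the interface pair, a minimal Python analog of the push-based contract is sketched below; the names mirror the .NET shape for familiarity, but this is not the Rx or StreamInsight API:

    ```python
    # Minimal analog of the IObservable/IObserver push contract, in Python
    # for illustration; names mirror .NET, but this is not the Rx API.
    class PrintObserver:                   # plays the IObserver<T> role
        def on_next(self, value): print("event:", value)
        def on_error(self, error): print("failed:", error)
        def on_completed(self): print("done")

    class EventSource:                     # plays the IObservable<T> role
        def __init__(self): self._observers = []
        def subscribe(self, observer):
            self._observers.append(observer)
            # Returning an unsubscribe action mirrors IDisposable in .NET.
            return lambda: self._observers.remove(observer)
        def push(self, value):             # the source decides when to notify
            for obs in list(self._observers):
                obs.on_next(value)

    source = EventSource()
    unsubscribe = source.subscribe(PrintObserver())
    source.push(42)                        # prints "event: 42"
    unsubscribe()
    ```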


  • Oracle at Information Security and Risk Management Conference (ISACA Conferences)

    - by Tanu Sood
    The North America Information Security and Risk Management (ISRM) Conference hosted by ISACA will be held this year from November 14-16 in Las Vegas, Nevada, and Oracle is a platinum sponsor. The ISRM / IT GRC event is designed not only to meet the exact needs of information security, governance, compliance and risk management professionals like you, but also to give you the tools you need to solve the issues you currently face. The event builds on and includes the key elements of information security, governance, compliance and risk management practices, and offers a fresh perspective on current and future trends. As a platinum sponsor, Oracle will have an opportunity not only to demonstrate its solutions but also to talk through our strategic roadmap and support, to ensure all organizations understand our key role within the industry in keeping corporate data and information safe.

    Join us at the Lunch and Learn to learn more about the latest advances in Oracle Identity Management.

    Lunch and Learn Session: Trends in Identity Management
    Speaker: Mike Neuenschwander, Senior Product Development Director, Oracle Identity Management

    As enterprises embrace mobile and social applications, security and audit have moved into the foreground. The way we work and connect with our customers is changing dramatically, and this means re-thinking how we secure the interaction and enable the experience. Work is an activity, not a place: mobile access enables employees to work from any device, anywhere and anytime. Organizations are utilizing "flash teams": instead of a dedicated group to solve problems, organizations utilize more cross-functional teams. Work is now social: email collaboration will be replaced by dynamic social-media-style interaction. In this session, we will examine these three secular trends and discuss how organizations can secure the work experience and adapt audit controls to address the "new work order".

    We also recommend you bookmark the following session:
    T1 Session 301: Gone in 60 Seconds: Mitigating Database Security Risk
    Friday, November 16, 8:30 am - 9:30 am

    And do be sure to stop by our booths, #100 and #102, to network with our product development team and get an onsite demonstration of Oracle security solutions. See you there!

    ISRM / IT GRC
    November 14-16, 2012
    Mirage Casino-Hotel
    3400 Las Vegas Boulevard South
    Las Vegas, NV 89109


  • Creating a layer of abstraction over the ORM layer

    - by Daok
    I believe that if your repositories use an ORM, they are already sufficiently abstracted from the database. However, where I am working now, someone believes that we should have a layer that abstracts the ORM, in case we would like to change the ORM later. Is it really necessary, or is it simply a lot of overhead to create a layer that will work with many ORMs?

    Edit: just to give more detail. We have POCO classes and entity classes mapped with AutoMapper. The entity classes are used by the repository layer, and the repository layer then uses the additional layer of abstraction to communicate with Entity Framework. The business layer has no direct access to Entity Framework; even without the additional layer of abstraction over the ORM, it has to go through the service layer, which in turn uses the repository layer. In both cases, the business layer is totally separated from the ORM. The main argument is being able to change the ORM in the future. Since the ORM is already localized inside the repository layer, to me it is already well separated, and I do not see why an additional layer of abstraction is required to have "quality" code.
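
    A language-neutral sketch of the poster's point, written in Python for brevity (the stack under discussion is C# / Entity Framework): once consumers depend only on a repository interface, the ORM behind it is already swappable, so a second abstraction layer adds little.

    ```python
    # Sketch of the argument: the repository interface is itself the seam.
    # Names are illustrative; the stack under discussion is C#/EF.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str

    class UserRepository(ABC):             # what the business layer sees
        @abstractmethod
        def get(self, user_id: int) -> User | None: ...
        @abstractmethod
        def add(self, user: User) -> None: ...

    class InMemoryUserRepository(UserRepository):
        """Stand-in for an ORM-backed implementation. Swapping ORMs means
        writing one new subclass, not rippling changes up the stack."""
        def __init__(self): self._rows: dict[int, User] = {}
        def get(self, user_id): return self._rows.get(user_id)
        def add(self, user): self._rows[user.id] = user

    repo: UserRepository = InMemoryUserRepository()
    repo.add(User(1, "alice"))
    print(repo.get(1))
    ```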


  • What is a Data Warehouse?

    Typically, data warehouses are considered to be non-volatile in comparison to traditional databases, because data within the warehouse does not change that often. In addition, data warehouses typically represent data through the use of multidimensional conceptual views, which allow data to be extracted based on the view and the current position within the view.

    Common data warehouse traits:
    - Relatively non-volatile data
    - Supports data extraction and analysis
    - Optimized for data retrieval and analysis
    - Multidimensional views of data
    - Flexible reporting
    - Multi-user support
    - Generic dimensionality
    - Transparent
    - Accessible
    - Unlimited dimensions of data
    - Unlimited aggregation levels of data

    Normally, data warehouses are much larger than their traditional database counterparts, because they store the basis data along with data derived via multidimensional conceptual views. As companies store larger and larger amounts of data, they will need a way to effectively and accurately extract analysis information that can be used to aid in formulating current and future business decisions. This can currently be done through data mining within a data warehouse. Data warehouses provide access to data derived through complex analysis, knowledge discovery and decision making. Secondly, they support the demand for high performance when analyzing an organization's existing and current data. Data warehouses provide support for an organization's data and acquired business knowledge. Within a data warehouse, multiple types of operations and sub-systems are supported.

    Common data warehouse sub-systems:
    - Online Analytical Processing (OLAP)
    - Decision-Support Systems (DSS)
    - Online Transaction Processing (OLTP)


  • OpenWorld General Session 2012: Middleware & JavaOne

    - by JuergenKress
    In this general session, hear how developers leverage new innovations in their applications and how customers achieve their business innovation goals with Oracle Fusion Middleware. We uploaded the key Fusion Middleware presentations (ppt format) to our SOA Community Workspace: OFM OOW2012.pptx and BPM Preview of Oracle BPM PS6.ppt (Oracle Partner confidential). Please visit our SOA Community Workspace (SOA Community membership required). Read the first feedback from our ACE Directors: Guido Schmutz, "My presentations at Oracle OpenWorld 2012"; Lucas Jellema, "OOW 2012 - Larry Ellison's Keynote Announcements: Exa, Cloud, Database"; and more from Antony Reynolds. Many tweets with the latest OOW information have been posted on Twitter under #soacommunity, and first impressions are posted on our Facebook page. Thanks for the excellent JavaOne summaries from AMIS, "JavaOne 2012: Strategy and Technical Keynote", and from Dustin, "JavaOne 2012: JavaOne Technical Keynote". In summary, JavaOne 2012 was a successful event, and Java is alive and more successful than ever before: make the future Java! IDC confirms it in their latest report, "Java 2.5 years after the acquisition": "As a result, Java made more significant advancements after the Sun acquisition than in the two and a half years prior to the acquisition. The Java ecosystem is healthy and remains on a growing trajectory."

    WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog | Twitter | LinkedIn | Mix | Forum | Wiki

    Technorati Tags: OOW, JavaOne, presentations, video, keynote, WebLogic Community, Oracle, OPN, Jürgen Kress


  • Does Unity have existing support for Timelines?

    - by Raven Dreamer
    I am planning the development of a game in Unity3D and trying to come to terms with what the engine already provides and what I must code myself. The game is going to be a rhythm game, which means syncing audio and graphical events so that they always play when they're supposed to. What I'm looking to avoid is a potential lag scenario where either the audio or the graphics starts to progress faster than the other. When we discussed this type of coordinating system in my game design class back at university, my professor called this type of design a "Timeline" class. The idea is that you can instantiate one or more of these to progress at different rates, schedule things to happen in the future, and sync up periodic events. However, calling this a "Timeline" class seems to have been limited to my professor himself, as googling whether a certain API features "Timeline" functionality has been a fruitless endeavor. Is there some more common name for this kind of functionality? Does Unity have any pre-existing methods to coordinate the scheduling of events like this, or is this the kind of thing that needs to be built on top of the engine? If it does, I'd appreciate being pointed towards some tutorials!
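
    Absent an engine-provided equivalent, the professor's "Timeline" is small enough to sketch directly. Here is a hypothetical version in Python to show the shape of the idea; in Unity you would port it to C# and drive tick() from Update() with Time.deltaTime:

    ```python
    # A sketch of the "Timeline" idea: each timeline advances at its own
    # rate and fires scheduled callbacks. All names are invented.
    import heapq

    class Timeline:
        def __init__(self, rate: float = 1.0):
            self.rate = rate          # 1.0 = real time, 2.0 = double speed
            self.now = 0.0
            self._events = []         # heap of (fire_time, seq, callback)
            self._seq = 0             # tiebreaker so callbacks never compare

        def schedule(self, delay: float, callback) -> None:
            heapq.heappush(self._events, (self.now + delay, self._seq, callback))
            self._seq += 1

        def tick(self, dt: float) -> None:
            self.now += dt * self.rate
            while self._events and self._events[0][0] <= self.now:
                _, _, callback = heapq.heappop(self._events)
                callback()

    beat = Timeline()
    beat.schedule(0.5, lambda: print("snare on the half-beat"))
    beat.tick(0.6)                    # fires once the timeline passes 0.5
    ```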


  • Handling Deployment to Multiple Environments

    - by JayGee
    How should I handle deploying web applications to multiple servers?

    Constraints: I have a dev, test and prod environment. No build server is available. Developers can't deploy to prod; the people who do deploy to prod copy files from test to prod, and they don't have VS installed.

    Currently: the way it's handled is with web.config transforms. However, deploying to prod involves putting prod code on the test server, from where it's copied over.

    Problem: sometimes simple mistakes are made, such as forgetting to change test back to the right environment after deployment, or the test config getting moved to prod instead of the prod config.

    Solution: so the question is, what is the best way to prevent these mistakes from happening? My first thought is to let the app determine which server it's on at runtime and use the appropriate settings, connection strings, etc. However, the server names could change in the not-too-distant future, so if multiple apps hard-code them, that would mean updating all of them. The easiest way to handle that situation would be to place a DLL in the GAC that determines the environment. Are there any drawbacks or possible complications this would cause? Or is there a better solution to the problem?
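
    The runtime-detection idea, sketched in Python for illustration (the poster's stack is .NET, where the same logic would live in that shared DLL); the hostnames and the override variable are hypothetical, and the point is that the mapping lives in exactly one place:

    ```python
    # Sketch of runtime environment detection with a single mapping and an
    # explicit override. Hostnames and variable names are hypothetical.
    import os
    import socket

    HOST_ENVIRONMENTS = {
        "web-dev-01": "dev",
        "web-test-01": "test",
        "web-prod-01": "prod",
    }

    def current_environment() -> str:
        override = os.environ.get("APP_ENVIRONMENT")   # explicit escape hatch
        if override:
            return override
        # Unknown machines default to dev, so a renamed server can never
        # silently pick up production settings.
        return HOST_ENVIRONMENTS.get(socket.gethostname(), "dev")

    print(current_environment())
    ```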


  • Why is it that software is still easily pirated today?

    - by mohabitar
    I've always been curious about this. Now, I wouldn't call myself a programmer yet, but I'm learning, so maybe the answer to this is obvious. It just seems a little hard to believe that with all of our technological advances and the billions of dollars spent on engineering the most unbelievable and mind-blowing software, we still have no other means of protecting against piracy than a serial number / activation key. I'm sure a ton of money, maybe even billions, went into creating Windows 7 or Office and even Snow Leopard, yet I can get them for free in less than 20 minutes. The same goes for all of Adobe's products, which are probably the easiest. I guess my question is: can there exist a foolproof and hack-proof method of protecting your software against piracy? If not realistically, is it at least theoretically possible? Or can hackers always find a way around whatever mechanisms these companies deploy?

    EDIT: So apparently, the answer is no; there's pretty much no way. And I'm sure these big companies have realized this as well. Should they adopt another sales model rather than charging a crapload for their software? (I know it's justified and they put a lot of hard work into their software, but it's still a lot of money.) Are there any alternative solutions that would benefit both the company and the user (e.g. if you purchase our product, we'll apply $X to your account, to be used toward future purchases from our company)?
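
    To make the activation-key point concrete, here is a toy sketch in Python of how a signed serial key can work, and why it can never be foolproof: the verification runs on the attacker's machine, so it can be patched out of the binary no matter how strong the cryptography is. The secret and key format are invented for the example:

    ```python
    # Toy activation-key scheme: keys are HMAC-signed, so they verify offline
    # but cannot be forged without the vendor secret. Secret and key format
    # are invented; this is an illustration, not a real product's scheme.
    import hashlib
    import hmac

    SECRET = b"vendor-private-key"

    def issue_key(customer: str) -> str:
        sig = hmac.new(SECRET, customer.encode(), hashlib.sha256).hexdigest()[:16]
        return f"{customer}-{sig}"

    def verify_key(key: str) -> bool:
        customer, _, sig = key.rpartition("-")
        expected = hmac.new(SECRET, customer.encode(), hashlib.sha256).hexdigest()[:16]
        return hmac.compare_digest(sig, expected)

    key = issue_key("alice@example.com")
    print(key, verify_key(key))   # True -- until someone patches the check
    ```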


  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects', triaged into three categories: Low, Medium and High. The issue we have is whether everything in this list should be recorded as a 'defect': some of the issues they found would be better described as refinements, or even nice-to-haves, and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect; however, we are a bit uncomfortable with this. While the relationship is good and we don't see a huge problem right now, we are concerned that, if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or as taking things too personally, and we are happy to make all of the changes identified. However, we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid here? Or could we be setting ourselves up for problems down the line by agreeing to this classification?


  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work this library performs is to marshal calls to the web service and do a little reorganizing of the responses, to present a slightly more object-oriented interface to the underlying service. Since the library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests; for example, I don't see any reason to mock away the web service. The work performed by the code I'm working on is very light, almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state (the system I'm using for these tests is isolated for testing purposes), so the results will be more or less predictable.

    My question is this: is it natural to use web service calls to confirm the results of operations within the tests, and to reset the state of the application within the scope of each test? Here's an example. One test might be createUserReturnsValidUserId() and might go like this:

    ```php
    public function createUserReturnsValidUserId()
    {
        // we're assuming a global connection to the service
        global $client;
        $newUserId = $client->CreateUser("user1");
        assertNotNull($newUserId);
        assertNotNull($client->FindUser($newUserId));
        $client->DeleteUser($newUserId);
    }
    ```

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test with respect to, for example, the number of users in the system). However, this still seems pretty fragile: lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters, I'm using PHPUnit for the testing framework.


  • Advantages of Singleton Class over Static Class?

    Point 1)
    Singleton: We can get the singleton object and then pass it to other methods.
    Static class: We cannot pass a static class to other methods the way we pass objects.

    Point 2)
    Singleton: In the future, it is easy to change the object-creation logic to some pooling mechanism.
    Static class: It is very difficult to implement pooling logic in a static class. We would need to make the class non-static and then make all its methods non-static, so all the calling code would need to change.

    Point 3)
    Singleton: Can a singleton class be inherited by a subclass? The singleton pattern does not impose any restriction on inheritance, so we should be able to do this, as long as the subclass upholds the singleton behaviour. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton. There are many reasons you might want to do it, and there are many ways to accomplish it; it depends on the language you use.
    Static class: We cannot inherit a static class from another static class in C#. Think about it this way: you access static members via the type name, like this:

    MyStaticType.MyStaticMember();

    Were you to inherit from that class, you would have to access the members via the new type name:

    MyNewType.MyStaticMember();

    Thus, the new type bears no relationship to the original when used in code, and there would be no way to take advantage of any inheritance relationship for things like polymorphism.
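
    Point 3 is the easiest to see in running code. Here is a small illustration, written in Python for brevity since the article itself notes the answer depends on the language; the same shape works in C# or Java with a protected constructor and a static Instance property:

    ```python
    # Illustration of Point 3: a singleton class can take part in inheritance
    # and polymorphism; a static class cannot. Names invented for the example.
    from datetime import datetime

    class Logger:
        @classmethod
        def instance(cls) -> "Logger":
            # Each concrete class stores its own instance, so a subclass
            # gets its own singleton rather than sharing the base class's.
            if cls.__dict__.get("_instance") is None:
                cls._instance = cls()
            return cls._instance

        def log(self, msg: str) -> None:
            print("log:", msg)

    class TimestampLogger(Logger):           # subclassing the singleton works
        def log(self, msg: str) -> None:     # overriding works as usual
            print(datetime.now().isoformat(), msg)

    def finish_job(logger: Logger) -> None:  # polymorphic use via base type
        logger.log("job finished")

    finish_job(TimestampLogger.instance())   # prints the timestamped variant
    ```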


  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so-called 'top down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'succeeding with SOA'. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'succeeding with SOA', then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:
    - An organisation will have a number of disparate IT systems
    - Some of these systems will have redundant data and functionality
    - Integration will give considerable ROI
    - Integration will already be under way
    - Services will already exist in the organisation
    - These services will be inconsistent in their implementation and in their governance

    So there are three goals here:
    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications; however, the business implication of this approach is the money saved through more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences that this will have on all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer?

    If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department.

    If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models are guaranteed to change in the not so near future. To 'succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And could the whole concept of SOA, as it is being described by clever salespeople today, create an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas and bundled the solution together in order to create a magic bullet, and then convinced ourselves that the bullet actually exists?

    Company A wants a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals.

    If company A starts building services from incomplete models, without a game plan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, each of which had one problem a short while ago, and each now has two. This has happened because of a focus on 'succeeding with SOA', rather than on solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization, but only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture).

    The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without changes rippling through from one side to the other.

    This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then, at a later date, map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling work.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services then transform and route messages to the actual service implementations. I like to think of the virtual services, with their beautifully modeled interfaces, as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying the handling of security, logging, exception handling and so on.

    This solves a big problem. The pressure to deliver services quickly is always there in projects, and it is very important to show value quickly when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery, with quality increasing over time, by letting modeling and service development occur in parallel and independently of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and it requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices when developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked, we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations.

    In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force on both something that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
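
    To make the virtualization idea concrete, here is a small sketch of a virtual service fronting an implementation adapter. It is written in Python purely for brevity, and every name in it is invented for illustration; it is not the Managed Services Engine API:

    ```python
    # Sketch of the service-virtualization idea: a virtual service exposes
    # the modeled contract and maps it onto an implementation adapter. All
    # names are invented; this is not the Managed Services Engine API.
    class CrmAdapter:
        """Thin 'adapter' service over a concrete system's own interface."""
        def create_account(self, payload: dict) -> int:
            print("calling proprietary CRM with", payload)
            return 42                        # the CRM's internal record id

    class RegisterCustomerService:
        """Virtual 'SOA service': speaks the business model's language."""
        def __init__(self, adapter: CrmAdapter):
            self.adapter = adapter

        def register_customer(self, customer: dict) -> dict:
            # Transform the modeled message into the adapter's shape...
            payload = {"acct_name": customer["name"], "tier": "standard"}
            record_id = self.adapter.create_account(payload)
            # ...and map the technical reply back to a business response.
            return {"customer_id": record_id, "status": "registered"}

    service = RegisterCustomerService(CrmAdapter())
    print(service.register_customer({"name": "Contoso"}))
    ```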


  • How to design highly scalable web services in Java?

    - by Kshitiz Sharma
    I am creating some web services that will have 2,000 concurrent users. The services are offered for free and are hence expected to get a large user base; in the future it may be required to scale up to 50,000 users. There are already a few other questions that address the issue, such as "Building highly scalable web services". However, my requirements differ from that question. For example, my application does not have a user interface, so images, CSS and JavaScript are not an issue, and it is in Java, so suggestions like using HipHop to translate PHP to native code are useless. Hence I decided to ask my question separately.

    This is my project setup:
    - REST-based web services using Apache CXF
    - Hibernate 3.0 (with relevant optimizations like lazy loading and custom HQL for tuning)
    - Tomcat 6.0
    - MySQL 5.5

    My questions are:
    1. Are there alternatives to MySQL that offer better performance for what I'm trying to do?
    2. What are some general things to abide by in order to scale a Java-based web application?
    3. I am thinking of putting my application on two Tomcat instances, with httpd redirecting each request to the appropriate Tomcat on the basis of load. Is this the right approach? Separate Tomcat instances can help, but then doesn't the database become the bottleneck, since both applications access the same database?
    4. I am a programmer, not a DB admin; how difficult would it be to cluster a MySQL database (or whatever database is offered as an alternative to question 1)?
    5. How effective are caching solutions like EHCache?
    6. Any other general best practices?

    Some clarifications: Could you partition the data? Yes, we could, but we're trying to avoid it. We need to run a lot of data mining algorithms, and the design will evolve over time, so we can't be sure where the lines of partition should be.


  • Is there a SUPPORTED way to run .NET 4.0 applications natively on a Mac?

    - by Dan
    What, if any, are the Microsoft-supported options for running C#/.NET 4.0 code natively on the Mac? Yes, I know about Mono, but among other things it lags behind Microsoft, and Silverlight only works in a web browser. A VMware-type solution won't cut it either. Here's the subjective part (which might get this closed): is there any semi-authoritative answer to why Microsoft just doesn't support .NET on the Mac itself? It would seem they could build on Silverlight and/or buy Mono and quickly be there. There is no need for a native Visual Studio; cross-compiling and remote debugging would be fine. The reason I ask is that where I work there is a growing amount of uncertainty about the future, which is causing a lot more development to be done in C++ instead of C#; brand new projects are choosing to use C++. Nobody wants to tell management 18-24 months from now "sorry" should the Mac (or iPad) become a requirement. C++ is seen as the safer option, even if it (arguably) means a loss in productivity today.

