Search Results

Search found 37989 results on 1520 pages for 'software as a service'.


  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides a ton of enhancements and bug fixes, contributing to improved stability and quality. One of the most exciting areas, and one that has seen tremendous growth over the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium for providing standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are "Service Catalogs: Defining Standardized Database Services" and "High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service" [Oracle MAA].

    EM12c has come with an out-of-the-box service catalog and self service portal since Release 1. It provides customers the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup of chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only mode (the latter requires the Active Data Guard option)
    - All database versions from 10g to 12c are supported (as certified with EM12c)
    - All 3 protection modes can be used - maximum availability, maximum performance, maximum protection
    - Log apply can be set to sync or async, along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all rows are supported in EM 12cR4; RON = RAC One Node, supported via custom post-scripts in the service template):

        Primary   Standby [1 or more]
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    A sample service catalog would look like the image below.
    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on their application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. This delivers the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In Release 4, we expand the scope of options supported by Snap Clone with the addition of the database feature CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog post by Tim Hall, "Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2", and the Oracle OpenWorld presentation by CERN, "Efficient Database Cloning using Direct NFS and CloneDB".

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the cloud artifacts like roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit is in reality a simple emcli script which takes a set of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. The kit works both for Oracle's engineered systems, like Exadata and SuperCluster, and for commodity hardware. One can draw a parallel to the Exadata One Command script, which similarly takes a set of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario is dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary XML: This file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle home locations, or container database names that will be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input XML: This file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, the cloud admin, the SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    Most of these have been requested by customers like you. They are:

    - Custom naming of DB services: Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    - 'Create like' for service templates: Creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: View the details of a profile - datafiles, control files, snapshot IDs, export/import files, etc. - prior to its selection in the service template.
    - Cleanup automation for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per-request basis or for an entire pool. As an extension, you can also delete successful requests.
    - Improved delete-user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
    - Support for multiple tablespaces for Schema as a Service: In addition to multiple schemas, users can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References: Cloud Management page on OTN; Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Updating an ADF Web Service Data Control When Service Structure or Location Change

    - by Shay Shmeltzer
    The web service data control in Oracle ADF gives you a simplified approach to consuming services in ADF applications, and now with ADF Mobile the usage of this data control seems to be growing. A frequent question we get is: what happens if the service that I'm consuming changes - how do I update my data control? Well, first we should mention that if you do a good design of your application before you actually code, then things like web service method signatures shouldn't change. The signature is the contract between the publisher and the consumer, and contracts shouldn't be broken. But in reality things do change during development stages, so here is how you can update both method signatures and the service location with the web service data control.

    After watching this video you might be tempted not to copy the WSDLs to your project - which lets you use the right-click update on a data control. However, there is a reason why the copy is on by default: it reduces network traffic when you are actually running your application, since ADF doesn't need to go to the server to find out the service structure. So for runtime performance, you should probably keep the WSDL local. I encourage you to look further into both the connections.xml file, where your service location is saved, and the datacontrols.dcx file, where its definition is kept, to get an even deeper understanding of how ADF works underneath the declarative layers.

    Read the article

  • Configuring service restart with 'restart service after' parameter

    - by Tim Brigham
    It appears that sc.exe isn't capable of setting the 'restart service after' parameter, and PowerShell isn't capable of setting up service restarts at all. My intended configuration is failure1/restart, failure2/restart, failure3/nothing, with a five minute counter between each restart. The five minute timer is extremely important. Is there anything else I can look at, other than some registry hackery, to configure this?
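    For context, the syntax I have been experimenting with is sc.exe's failure subcommand, which takes per-action delays in milliseconds - a rough sketch (the service name is a placeholder):

        sc failure MyService reset= 86400 actions= restart/300000/restart/300000//0

    That is intended to mean: restart after five minutes on the first and second failures, do nothing on the third, and reset the failure count after a day. Whether that per-action delay maps exactly to the GUI's 'restart service after' field is part of what I am unsure about.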

    Read the article

  • Recommended service account setup for MS SQL Server 2005/2008

    - by boxerbucks
    We have a number of MS SQL servers in our environment running either SQL Server 2005 Standard/Enterprise or SQL Server 2008 Enterprise. Currently the SQL services are running as Local Service or Network Service, and the Microsoft-recommended best practice is to run as a domain account, which is what we are trying to move towards. Is the best practice with regards to domain accounts to have a separate domain account per service per server? So if we have 4 SQL services we want to run per server and we have 50 servers, we would create 50 * 4 = 200 accounts in AD? This seems excessive to me, and I was wondering if anyone has any real experience with this type of setup and its management.

    Read the article

  • Software evolution: the new challenges of today's software industry

    Software evolution: the new challenges of today's software industry. In this program we present an overview of the technology opportunities that favor the creation of regional ventures, from the developer relations team for the southern Latin America region. We will cover technology scenarios, focusing on the impact of software evolution and the cloud computing model, and finally we will analyze the most important opportunities together with various regional experts. Technology, challenges, opinions, and the whole ecosystem represented by regional passion and talent. From: GoogleDevelopers Views: 0 0 ratings Time: 02:00:00 More in Education

    Read the article

  • Best Method For Evaluating Existing Software or New Software

    How many of us have been faced with having to decide on an off-the-shelf or a custom-built component, application, or solution to integrate into an existing system or to be the core foundation of a new system? What is the best method for evaluating existing software or new software still in the design phase? One of the industry-preferred methodologies is the Active Reviews for Intermediate Designs (ARID) evaluation process. ARID is a hybrid of the Active Design Review (ADR) methodology and the Architecture Tradeoff Analysis Method (ATAM).

    So what is ARID? ARID's main goal is to ensure quality, detailed designs in software. One way in which it does this is by empowering reviewers through generic, open-ended survey questions. This approach attempts to remove the possibility of standard answers such as "Yes" or "No". The ADR process avoids "Yes"/"No" questions because they can be leading, depending on how the question is asked; additionally, such questions tend to receive less thought than more open-ended ones.

    Common Active Design Review questions:

    - What possible exceptions can occur in this component, application, or solution?
    - How should exceptions be handled in this component, application, or solution?
    - Where should exceptions be handled in this component, application, or solution?
    - How should the component, application, or solution flow, based on the design?
    - What is the maximum execution time for every component, application, or solution?
    - What environments can this component, application, or solution run in?
    - What data dependencies does this component, application, or solution have?
    - What kind of data does this component, application, or solution require?

    OK, now I know what ARID is - how can I apply it? Let's imagine that your organization is going to purchase an off-the-shelf (OTS) solution for its customer-relationship management software. What process would we use to ensure that the correct purchase is made? If we use ARID, then we have a series of 9 steps broken up into 2 phases to ensure that the correct OTS solution is purchased.

    Phase 1

    1. Identify the reviewers
    2. Prepare the design briefing
    3. Prepare the seed scenarios
    4. Prepare the materials

    When identifying reviewers for a design, it is preferred that they be pulled from a candidate pool of developers who are going to implement the design. The belief is that developers actually implementing the design will have more of a vested interest in ensuring that the design is correct prior to the start of coding.

    The design briefing consists of a summary of the design and examples of the design solving real-world problems, and should typically be no longer than two hours. The primary goal of this briefing is to summarize the design well enough that the review members could actually implement it. In the example of purchasing an OTS product, I would review my briefing with the review facilitator prior to its distribution, to ensure that nothing was excluded that should have been included. This practice also lets me test the length of the briefing to ensure that it can be delivered in an appropriate amount of time.

    Seed scenarios are designed to illustrate conceptualized scenarios when applied with a set of sample data. These scenarios can then be used by the reviewers in the actual evaluation of the software. All materials needed for the evaluation should be prepared ahead of time so that they can be reviewed prior to and during the meeting.
    Materials included:

    - Presentation
    - Seed scenarios
    - Review agenda

    Phase 2

    1. Present ARID
    2. Present the design
    3. Brainstorm and prioritize scenarios
    4. Apply scenarios
    5. Summarize

    Prior to the start of any ARID review meeting, the facilitator should explain the remaining steps of ARID so that all participants know exactly what they will be doing before the review process starts. Once the ARID rules have been laid out, the lead designer presents an overview of the design, which typically takes about two hours. During this time, as a standard rule, the review panel is not allowed to ask questions about the design or its rationale, but questions are written down for use later in the process. After the presentation, the compiled list of questions is summarized and sent back to the lead designer as areas that need to be addressed further. In the example of purchasing an OTS product, issues could arise regarding security, the implementation needed, or even whether this is the correct product to solve the intended problem.

    After the design presentation, the brainstorm-and-prioritize step reduces the seed scenarios down to just the highest-priority scenarios. These are then used to test the design for suitability. Once the selected scenarios have been defined, the reviewers apply the examples provided in the presentation to the scenarios. The intended output of this process is code or pseudocode that makes use of the examples provided while solving the selected seed scenarios. As a standard rule, the designers of the system are not allowed to help the review board unless the reviewers all become stuck. When this occurs, it is documented along with the reason why the designer needed to help get the review panel back on track. Once all of the scenarios have been completed, the review facilitator goes over the issues that arose during the process with the group. Then the reviewers are polled as to the efficacy of the review experience.

    References: Clements, Paul, Kazman, Rick, & Klein, Mark. (2002). Evaluating Software Architectures: Methods and Case Studies. Indianapolis, IN: Addison-Wesley.

    Read the article

  • Create software without programming

    - by Hafizul Amri
    Is there any system or software that lets you create a system or software without having to do programming? UPDATE: Answering Kennethvr's question: the software I mean is web-based software used to create a simple CRUD system, such as a contact management system, where you can choose how the data will be displayed. UPDATE: I have found the PHPRunner software. It can create a simple CRUD system without you needing to touch any code. Take a look at it!

    Read the article

  • Ubuntu Software Center is crashing while trying to install psychonauts

    - by GonzoDark
    I am having a problem installing Tim Schafer's epic platform video game Psychonauts through the Ubuntu Software Center. I bought the game on http://www.humblebundle.com/ and I used the new redeem option introduced in this bundle: "redeem your bundle on the Ubuntu Software Center." When I have downloaded approximately 2.1 GB of Psychonauts, the Ubuntu Software Center starts to repeatedly crash, popping up a new crash-report dialog every few seconds (before the previous ones are closed), which will crash the computer after a few minutes unless I stop the download. I also have a file-size bug, where the Ubuntu Software Center tells me that I have downloaded 880,2 MB out of 133,3 MB. I use the new Ubuntu 12.04 LTS and Ubuntu Software Center version 5.2.2.2 (©2009-2011 Canonical <--- that is also a bug, it should say 2012 I guess). I hope someone can help me.

    Read the article

  • How to update the USB Ubuntu (ISO) with new software

    - by nakata
    I have Peppermint (based on Ubuntu) running off a USB stick. It's basically an ISO image that loads using Grub4Dos. The problem is that each time I load Ubuntu and want to use e.g. TeamViewer or Firefox, I have to install it on the running live system. Since it's running from an ISO image, a reboot means the software is gone. I can edit/open the ISO image using WinImage (on XP) - would you know how I can add extra software into this ISO image so that the next time Ubuntu loads, the new software is on there? Is there some special repository directory where I should copy the software files before recreating the ISO image? The directory structure of this ISO image is along the lines of:

        .disk
        [BOOT]
        Casper
        dists
        install
        isolinux
        pool
        preseed

    Appreciate your help with this. The software I am really interested in installing is the plugin for LogMeIn (for which I may need Firefox, thus the Firefox installation) and TeamViewer. Thanks, Nakata
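    In case it helps frame answers, the generic live-ISO remaster flow I have been reading about looks roughly like this (an untested sketch, run on a Linux machine; file names are assumptions, and TeamViewer would need its own .deb or repository inside the chroot):

        # unpack the ISO and the compressed live filesystem
        mkdir iso squash
        sudo mount -o loop peppermint.iso /mnt
        cp -a /mnt/. iso/
        sudo unsquashfs -d squash iso/casper/filesystem.squashfs

        # install extra packages inside the live filesystem
        # (a real run would also bind-mount /proc and /sys and copy resolv.conf)
        sudo chroot squash apt-get update
        sudo chroot squash apt-get install -y firefox

        # repack the filesystem and rebuild a bootable ISO
        sudo mksquashfs squash iso/casper/filesystem.squashfs -noappend
        genisoimage -o peppermint-custom.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
            -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -V "Peppermint Custom" iso/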

    Read the article

  • Start/stop Windows Service A also Start/stop Windows service B

    - by Sean
    I created two Windows services A and B, and would like to add dependency between them so that I can: Start service A (service B starts automatically) Stop service A (service B stops automatically) However, the command sc config ServiceA depend= ServiceB only works for: Start service A (service B starts automatically) Stop service B (service A stops automatically) Is there any way to make service B stop automatically when I stop service A?
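    One workaround I am considering (a hedged sketch, not something I have verified end to end): keep the existing dependency for the start direction, and have service A stop service B explicitly in its OnStop handler using System.ServiceProcess:

        using System;
        using System.ServiceProcess;

        public partial class ServiceA : ServiceBase
        {
            protected override void OnStop()
            {
                // Hypothetical: when A is stopped, push the stop to ServiceB as well.
                using (var b = new ServiceController("ServiceB"))
                {
                    if (b.Status == ServiceControllerStatus.Running)
                    {
                        b.Stop();
                        b.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
                    }
                }
                base.OnStop();
            }
        }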

    Read the article

  • Android remote service doesn't call service methods

    - by tarantel
    Hello, I'm developing GPS tracking software on Android. I need IPC to control the service from different activities, so I decided to develop a remote service with AIDL. This wasn't a big problem, but now calls always end up in the methods of the interface and not in those of my service class. Maybe someone could help me? Here is my AIDL file:

        package test.de.android.tracker;

        interface ITrackingServiceRemote {
            void startTracking(in long trackId);
            void stopTracking();
            void pauseTracking();
            void resumeTracking(in long trackId);
            long trackingState();
        }

    And here a short version of my service class:

        public class TrackingService extends Service implements LocationListener {

            private LocationManager mLocationManager;
            private NotificationManager mNotificationManager;
            private TrackDb db;
            private long trackId;
            private boolean isTracking = false;

            @Override
            public void onCreate() {
                super.onCreate();
                mNotificationManager = (NotificationManager) this.getSystemService(NOTIFICATION_SERVICE);
                mLocationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
                db = new TrackDb(this.getApplicationContext());
            }

            @Override
            public void onStart(Intent intent, int startId) {
                super.onStart(intent, startId);
            }

            @Override
            public void onDestroy() {
                // TODO
                super.onDestroy();
            }

            @Override
            public IBinder onBind(Intent intent) {
                return this.mBinder;
            }

            private IBinder mBinder = new ITrackingServiceRemote.Stub() {
                public void startTracking(long trackId) throws RemoteException {
                    TrackingService.this.startTracking(trackId);
                }

                public void pauseTracking() throws RemoteException {
                    TrackingService.this.pauseTracking();
                }

                public void resumeTracking(long trackId) throws RemoteException {
                    TrackingService.this.resumeTracking(trackId);
                }

                public void stopTracking() throws RemoteException {
                    TrackingService.this.stopTracking();
                }

                public long trackingState() throws RemoteException {
                    return TrackingService.this.trackingState();
                }
            };

            public synchronized void startTracking(long trackId) {
                // request updates every 250 meters or 0 sec
                this.trackId = trackId;
                mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 250, this);
                isTracking = true;
            }

            public synchronized long trackingState() {
                if (isTracking) {
                    return trackId;
                } else {
                    return -1;
                }
            }

            public synchronized void stopTracking() {
                if (isTracking) {
                    mLocationManager.removeUpdates(this);
                    isTracking = false;
                } else {
                    Log.i(TAG, "Could not stop because service is not tracking at the moment");
                }
            }

            public synchronized void resumeTracking(long trackId) {
                if (!isTracking) {
                    this.trackId = trackId;
                    mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 250, this);
                    isTracking = true;
                } else {
                    Log.i(TAG, "Could not resume because service is already tracking track " + this.trackId);
                }
            }

            public synchronized void pauseTracking() {
                if (isTracking) {
                    mLocationManager.removeUpdates(this);
                    isTracking = false;
                } else {
                    Log.i(TAG, "Could not pause because service is not tracking at the moment");
                }
            }

            public void onLocationChanged(Location location) {
                // TODO
            }
        }

    For easier access from the client I wrote a ServiceManager class which sets up the ServiceConnection and through which you can call the service methods.
    Here is my code for this:

        public class TrackingServiceManager {

            private static final String TAG = "TrackingServiceManager";
            private ITrackingServiceRemote mService = null;
            private Context mContext;
            private Boolean isBound = false;
            private ServiceConnection mServiceConnection;

            public TrackingServiceManager(Context ctx) {
                this.mContext = ctx;
            }

            public void start(long trackId) {
                if (isBound && mService != null) {
                    try {
                        mService.startTracking(trackId);
                    } catch (RemoteException e) {
                        Log.e(TAG, "Could not start tracking!", e);
                    }
                } else {
                    Log.i(TAG, "No Service bound!");
                }
            }

            public void stop() {
                if (isBound && mService != null) {
                    try {
                        mService.stopTracking();
                    } catch (RemoteException e) {
                        Log.e(TAG, "Could not stop tracking!", e);
                    }
                } else {
                    Log.i(TAG, "No Service bound!");
                }
            }

            public void pause() {
                if (isBound && mService != null) {
                    try {
                        mService.pauseTracking();
                    } catch (RemoteException e) {
                        Log.e(TAG, "Could not pause tracking!", e);
                    }
                } else {
                    Log.i(TAG, "No Service bound!");
                }
            }

            public void resume(long trackId) {
                if (isBound && mService != null) {
                    try {
                        mService.resumeTracking(trackId);
                    } catch (RemoteException e) {
                        Log.e(TAG, "Could not resume tracking!", e);
                    }
                } else {
                    Log.i(TAG, "No Service bound!");
                }
            }

            public float state() {
                if (isBound && mService != null) {
                    try {
                        return mService.trackingState();
                    } catch (RemoteException e) {
                        Log.e(TAG, "Could not get tracking state!", e);
                        return -1;
                    }
                } else {
                    Log.i(TAG, "No Service bound!");
                    return -1;
                }
            }

            /**
             * Method for binding the service to the client.
             */
            public boolean connectService() {
                mServiceConnection = new ServiceConnection() {
                    @Override
                    public void onServiceConnected(ComponentName name, IBinder service) {
                        TrackingServiceManager.this.mService = ITrackingServiceRemote.Stub.asInterface(service);
                    }

                    @Override
                    public void onServiceDisconnected(ComponentName name) {
                        if (mService != null) {
                            mService = null;
                        }
                    }
                };
                Intent mIntent = new Intent("test.de.android.tracker.action.intent.TrackingService");
                this.isBound = this.mContext.bindService(mIntent, mServiceConnection, Context.BIND_AUTO_CREATE);
                return this.isBound;
            }

            public void disconnectService() {
                this.mContext.unbindService(mServiceConnection);
                this.isBound = false;
            }
        }

    If I now try to call a method from an activity, for example start(trackId), nothing happens. The binding is OK. When debugging, it always runs into startTracking() in the generated ITrackingServiceRemote.java file and not into my TrackingService class. Where is the problem? I can't find anything wrong. Thanks in advance! Tobias
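    For reference, here is a sketch of the manifest declaration such a binding would need - the client binds with an implicit Intent action, so the service entry in AndroidManifest.xml must expose a matching intent-filter (the names below are taken from the code above, but treat the snippet as hypothetical):

        <!-- Hypothetical sketch: the action string must match the Intent
             created in connectService() above. -->
        <service android:name=".TrackingService">
            <intent-filter>
                <action android:name="test.de.android.tracker.action.intent.TrackingService" />
            </intent-filter>
        </service>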

    Read the article

  • SQL Server 2012 Service Pack 2 is available - but there's a catch!

    - by AaronBertrand
    Service Pack 2 is available: http://www.microsoft.com/en-us/download/details.aspx?id=43340 The build number is 11.0.5058, and this includes fixes up to and including SQL Server 2012 SP1 CU #9. (The complete list of fixes is exhaustive, including all fixes from SP1 CU #1 -> #9, but the post-CU #9 fixes are listed here: http://support.microsoft.com/KB/2958429.) However, you may be affected by the regression bug I talked about earlier today, which could lead to data loss or corruption during online...(read more)

    Read the article

  • What industries develop the highest quality software? Lowest quality? Why?

    - by Derek Mahar
    From your experience, of those industries that develop custom software for internal use such as financial services companies, which ones produce higher quality software measured in defect rates and, more qualitatively, ease of maintenance over the long term? What contributes the most to this achievement of higher quality? Is it due to better software development practices such as greater emphasis on testing or specification? Developers who better understand the tools or who are strong problem solvers? Better communication between team members? On the flip-side, which industries do you think produce the lowest quality software? Why?

    Read the article

  • Create and Consume WCF service using Visual Studio 2010

    - by sreejukg
    In this article I am going to demonstrate how to create a WCF service that can be hosted inside IIS, and a Windows application that consumes the WCF service. To support service-oriented architecture, Microsoft developed the programming model named Windows Communication Foundation (WCF). ASMX was the prior technology from Microsoft; it was completely based on XML, and the .NET Framework continues to support ASMX web services in future versions as well. While ASMX web services were the first step towards service-oriented architecture, Microsoft made a big step forward by introducing WCF. An overview of planning for WCF can be found at this link: http://msdn.microsoft.com/en-us/library/ff649584.aspx. The following are the important differences between WCF and ASMX from an ASP.NET developer's point of view:

    1. ASMX web services are easy to write, configure, and consume
    2. ASMX web services can only be hosted in IIS
    3. ASMX web services can only use HTTP
    4. WCF can be hosted inside IIS, a Windows service, a console application, WAS (Windows Process Activation Service), etc.
    5. WCF can be used with HTTP, TCP/IP, MSMQ, and other protocols

    The detailed differences between ASMX web services and WCF can be found here: http://msdn.microsoft.com/en-us/library/cc304771.aspx. Though WCF is a big step for the future, Visual Studio makes it simple to create, publish, and consume a WCF service. In this demonstration, I am going to create a service named SayHello that accepts 2 parameters: a name and a language code. The service will return a hello greeting for the user name in the language that corresponds to the code. So the proposed service usage is as follows:

    Caller: SayHello("Sreeju", "en") -> return value -> Hello Sreeju
    Caller: SayHello("???", "ar") -> return value -> ????? ???
    Caller: SayHello("Sreeju", "es") -> return value -> Hola Sreeju

    Note: calling an automated translation service is not the intention of this article. If you are interested, you can find the Bing Translator API and use it in your application: http://www.microsofttranslator.com/dev/

    So let us start. First I am going to create a service application that offers the SayHello service. Open Visual Studio 2010, go to File -> New Project, select WCF under your preferred language in the templates section, select WCF Service Application as the project type, give the project a name (I named it HelloService), and click OK so that Visual Studio will create the project for you. In this demonstration, I have used C# as the programming language. Visual Studio will create the necessary files for you to start with. By default it will create a service named Service1.svc and an interface named IService1.cs. The screenshot of the project in the solution explorer is as follows.

    Since I want to demonstrate how to create a new service, I deleted Service1.svc and IService1.cs from the project by right-clicking each file and selecting Delete. Now there is no service available in the project, so I am going to create one. From the solution explorer, right-click the project and select Add -> New Item. The Add New Item dialog will appear. Select WCF Service from the list, give the name as HelloService.svc, and click on the Add button. Now Visual Studio will create 2 files, named IHelloService.cs and HelloService.svc. These files are basically the service definition (IHelloService.cs) and the service implementation (HelloService.svc). Let us examine the IHelloService interface.
    The code states that IHelloService is the service definition and that it provides an operation/method (similar to a web method in ASMX web services) named DoWork(). Any WCF service will have a definition file as an interface that defines the service. Let us see what is inside HelloService.svc. The code implements the interface IHelloService; the HelloService class needs to implement all the methods defined in the service definition. Let me now change the service to what I require. Open IHelloService.cs in Visual Studio, delete the DoWork() method, and add a definition for SayHello(); do not forget to add the OperationContract attribute to the method. The modified IHelloService.cs will look as follows. Now implement the SayHello method in the HelloService.svc.cs file; here is the code I wrote for it.

    I am done with the service. Now you can build and run the service by pressing F5 (or selecting Start Debugging from the Debug menu). Visual Studio will host the service and give you a client to test it. In the left pane it shows the services available on the server, and on the right side you can invoke the service. To test the SayHello service, double-click on it in that window. It will ask you to enter the parameters; then click on the Invoke button. See a sample output below.

    Now I am done with the service; the next step is to write a service client. Creating a consumer application involves 2 steps: first, generating the class and configuration file corresponding to the service; second, creating a project that utilizes them. First I am going to generate the class and configuration file. There is a great tool available with Visual Studio named svcutil.exe; this tool will create the necessary class and configuration files for you. Read the documentation for svcutil.exe here: http://msdn.microsoft.com/en-us/library/aa347733.aspx. Open the Visual Studio command prompt; you can find it under Start Menu -> All Programs -> Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt. Make sure the service is running in Visual Studio, and note the URL for the service (from the running window, you can right-click and choose Copy Address). Now from the command prompt, run the svcutil.exe command, passing the URL and the /d switch for the directory in which to store the output files (in this case d:\temp). If you are writing to the Windows drive (in my case C:), make sure you open the command prompt with the Run as Administrator option; otherwise you will get a permission error (only in Windows 7 or Windows Vista). The tool has created 2 files, HelloService.cs and output.config.

    Now the next step is to create a new project, use the created files, and consume the service. I am going to add a console application to the current solution: right-click the solution name in the solution explorer and select Add -> New Project. Under Visual C#, select Console Application and give the project a name; I named it TestService. Now navigate to d:\temp, where I generated the files with svcutil.exe. Rename output.config to app.config. The next step is to add both files (d:\temp\HelloService.cs and app.config) to the project: in the solution explorer, right-click the project, select Add -> Add Existing Item, browse to the d:\temp folder, select the 2 files mentioned before, and click on the Add button.
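    For reference, the svcutil.exe invocation from the step above would look roughly like this (the port number is whatever the Visual Studio test host assigned, so treat the URL as a placeholder):

        svcutil.exe http://localhost:8732/HelloService.svc?wsdl /d:d:\temp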
    Now you need to add a reference to System.ServiceModel to the project. From the solution explorer, right-click References under the TestService project and select Add Reference. In the Add Reference dialog, select the .NET tab, select System.ServiceModel, and click OK. Now open Program.cs by double-clicking on it, and add the code that consumes the web service to the Main method. The modified file looks as follows. Right-click the TestService project and set it as the startup project, then press F5 to run it. See the sample output below.

    Publishing a WCF service under IIS is similar to publishing an ASP.NET application: publish the application to a folder using the Visual Studio publishing feature, create a virtual directory, and convert it to an application. Don't forget to set the application pool to use ASP.NET version 4. One last thing you need to check is the app.config file you added to the solution. See the client element under the system.serviceModel element; there is an endpoint element with an address attribute that points to the published service URL. If you permanently host the service under IIS, you can simply change the address attribute to the corresponding URL and your application will consume the service. You have seen how easily you can build and consume a WCF service. If you need the solution in zipped format, please post your email below.
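    Since the original post's code screenshots are not reproduced in this text version, here is a minimal sketch of what the contract and implementation described above could look like (the greeting strings are illustrative assumptions):

        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        public interface IHelloService
        {
            [OperationContract]
            string SayHello(string name, string languageCode);
        }

        public class HelloService : IHelloService
        {
            // Illustrative greetings only - the original article's strings may differ.
            private static readonly Dictionary<string, string> Greetings =
                new Dictionary<string, string>
                {
                    { "en", "Hello" },
                    { "ar", "مرحبا" },
                    { "es", "Hola" }
                };

            public string SayHello(string name, string languageCode)
            {
                string greeting;
                if (!Greetings.TryGetValue(languageCode, out greeting))
                    greeting = Greetings["en"];
                return greeting + " " + name;
            }
        }

    On the client side, the svcutil-generated proxy would then be used along the lines of new HelloServiceClient() followed by a call like SayHello("Sreeju", "es"), returning "Hola Sreeju".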

    Read the article

  • SQLAuthority News – Microsoft SQL Server 2012 Service Pack 1 Released (SP1)

    - by pinaldave
    Last week, I was attending SQLPASS 2012 and had great fun at the event. During the event, the long-awaited SQL Server 2012 Service Pack 1 was released. I am pretty excited about SP1, as new service packs are cumulative updates and upgrade all editions and service levels of SQL Server 2012 to SP1. This service pack contains SQL Server 2012 Cumulative Update 1 (CU1) and Cumulative Update 2 (CU2). The latest SP1 has many new and enhanced features. Here are a few, for example:

    - Cross-cluster migration of AlwaysOn Availability Groups for OS upgrade
    - Selective XML Index
    - DBCC SHOW_STATISTICS works with SELECT permission
    - New function that returns statistics properties - sys.dm_db_stats_properties
    - SSMS Complete in Express
    - SlipStream full installation
    - Business Intelligence highlights with Office and SharePoint Server 2013
    - Management object support added for Resource Governor DDL

    Please note that the size of the service pack is nearly 1 GB. Here is the link to SQL Server 2012 Service Pack 1. SQL Server Express is the free and feature-rich edition of SQL Server; it is used with lightweight website and desktop applications. Here is the link to SQL Server 2012 EXPRESS Service Pack 1. Here is the question for you - how long have you been using SQL Server 2012? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Pack

    Read the article

  • Can't install software by terminal

    - by behnam mohammadi
    I don't know what packages I have installed such that I can no longer get and install packages in the terminal. E.g. when trying to install Prozgui, I got this error:

        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 60, in <module>
            sp = SoftwareProperties()
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 90, in __init__
            self.reload_sourceslist()
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 538, in reload_sourceslist
            self.distro.get_sources(self.sourceslist)
          File "/usr/lib/python2.7/dist-packages/aptsources/distro.py", line 91, in get_sources
            raise NoDistroTemplateException("Error: could not find a "
        aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template

    and it happens for all other packages too. Plus, my Software Center has been disabled and doesn't start. I get this error for that too:

        Traceback (most recent call last):
          File "/usr/bin/software-center", line 111, in <module>
            from softwarecenter.app import SoftwareCenterApp
          File "/usr/share/software-center/softwarecenter/app.py", line 40, in <module>
            from softwarecenter.db.application import Application, DebFileApplication
          File "/usr/share/software-center/softwarecenter/db/application.py", line 30, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 151, in <module>
            distro_instance=_get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 140, in _get_distro
            module = __import__(distro_id, globals(), locals(), [], -1)
        ImportError: No module named OPTIMOS

    Can anyone please help me with this? Thank you in advance!

    Read the article

  • How can a large number of developers write software together without either a cumbersome process or

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this every major issue has resulted in a new check - either automated or manual. As a result the process of delivering software is becoming ever more burdensome. So that requires more developers which... well you can see it is a vicious circle. We now have a problem with releasing software quickly - the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

    Read the article

  • Unable to add CRM 2011's Organization service as Service Reference to VS project

    - by Scorpion
    I have a problem accessing the Organization Service when I try to add it as a service reference in Visual Studio. However, I can access the service in the browser. I have tried adding the OrganizationData service and there is no issue with that.

    An error occurred while attempting to find the service at 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.

    Error Details:

        There was an error downloading 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc/_vti_bin/ListData.svc/$metadata'.
        The request failed with HTTP status 400: Bad Request.
        Metadata contains a reference that cannot be resolved: 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        Metadata contains a reference that cannot be resolved: 'http://xxxxxxxx/xxxxx/XRMServices/2011/Organization.svc'.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret, but also flow through a username token so that your listening WCF service can identify who sent the message. This could be either an application or a user, depending on how you want to use your token.

    Prerequisites

    Before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps of setting some of this up. In the solution we have a simple console application which represents the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go into significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail: http://tinyurl.com/8s5nwrz

    The Service Component

    To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also to accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service, which is the Visual Studio template for a WCF service when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and a local endpoint, which I simply used to do a little bit of diagnostics and to check it was working; but more importantly there is the Windows Azure endpoint, which is using the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note in the above picture are the service behavior called MyServiceBehaviour and the service bus endpoint behavior called MyEndpointBehaviour. We will go into these in more detail later.

    The Relay Binding

    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus, and the message credential is where we use our username token, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.
    The Endpoint Behaviour

    In the below picture you can see the endpoint behavior, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. Hopefully, if you are familiar with the Windows Azure Service Bus relay feature, the above is very familiar to you, as this is a very common setup for this section. There is nothing specific to the username token implementation here.

    The Service Behaviour

    Now we come to the bit with most of the username token work in it. When configuring the service behavior, I have included the serviceCredentials element and set it up to use userNameAuthentication, and you can see that I have created my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple access-auditing capability.

    My UsernamePassword Validator

    The below picture shows the details of the username password validator class I have implemented. WCF will hand off to this class when validating the token, giving me a nice way to check the token credentials against an on-premise store. You have all of the validation features of a non-service-bus WCF implementation available, such as validating the username and password against Active Directory or ASP.NET membership features, or, as in my case above, something much simpler.

    The Client

    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application, and in the Program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below that I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding.

    Next is my configuration for the relay binding. You can see below that I have configured security to use TransportWithMessageCredential, so we will flow the username token with the message, and also the RelayAccessToken relayClientAuthenticationType, which means the component will be validated against ACS before being allowed to access the relay endpoint to send a message.

    After the binding we need to configure the endpoint behavior like in the below picture. This is the normal configuration to use a shared secret for accessing a Service Bus endpoint.

    Finally, below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF service, but this time we have also set the ClientCredentials to use the appropriate username and password, which will be flowed through the service bus to our service, which will validate them.

    Conclusion

    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a username token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented.

    Sample

    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip
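    Because the configuration and code screenshots are not reproduced in this text version, here is a minimal sketch of the kind of custom username/password validator the walkthrough describes (the credential check is a placeholder - the article's real implementation validates against an on-premise store):

        using System.IdentityModel.Selectors;
        using System.IdentityModel.Tokens;

        public class MyUserNamePasswordValidator : UserNamePasswordValidator
        {
            // WCF calls this for each message once serviceCredentials/userNameAuthentication
            // points at this type via customUserNamePasswordValidatorType.
            public override void Validate(string userName, string password)
            {
                // Placeholder check - swap in Active Directory, ASP.NET membership,
                // or any other on-premise store.
                if (userName != "demo" || password != "demo-secret")
                    throw new SecurityTokenException("Unknown username or incorrect password");
            }
        }

    On the client, the matching piece is simply setting proxy.ClientCredentials.UserName.UserName and .Password before making the call, as the walkthrough shows.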

    Read the article

  • Amazon SOA: database as a Service

    - by Martin Lee
    There is an interesting interview with Werner Vogels which is partly about how Amazon does service-oriented architecture:

        For us service orientation means encapsulating the data with the business logic that operates on the data, with the only access through a published service interface. No direct database access is allowed from outside the service, and there's no data sharing among the services.

    I do not understand that. Why do they need to 'wrap' a database in some layer if it can already be consumed as a service by other services through database adapters? Does Amazon do that just because they need to expose the database to third parties, or for some other reason? Why is "no direct database access allowed"? What are the advantages of such an architectural decision?
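    To make the quoted principle concrete, here is a small sketch (names are illustrative) of what "access only through a published service interface" means - other services program against this contract and never see the tables behind it:

        // The order data lives in a database that only OrderService can reach.
        // Consumers depend on this published contract, so the owning team can
        // change schemas, indexes, or even the storage engine without breaking them.
        public interface IOrderService
        {
            OrderSummary GetOrder(string orderId);
        }

        public class OrderSummary
        {
            public string OrderId { get; set; }
            public decimal Total { get; set; }
        }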

    Read the article

  • Who are the thought leaders in software engineering/development? [closed]

    - by Mohsin Hijazee
    Possible Duplicate: What are the big contemporary names in the programming field? I am sorry if this is a duplicate question or is useless. I want to compile a list of influential people in our industry who can be considered opinionated thought leaders. There are basically two characteristics that I'm referring to here:

    1. The person has introduced new concepts/terminology/trends or talked about existing ones in a thought-provoking way.
    2. A majority or part of their writings are available online.

    Some of the people who I think of as thought leaders are:

    - Martin Fowler: known for domain-specific languages, Active Record, IoC
    - Joel Spolsky: known for his 12-point Joel Test and the Law of Leaky Abstractions
    - Kent Beck: known for XP
    - Paul Graham

    Any other names and links?

    Read the article
