Search Results

Search found 41110 results on 1645 pages for 'oracle integration solution'.

  • Cheap Solution for Routing a Toll Free Number to a Standard POTS Number

    - by VxJasonxV
    I do some technical work for an Internet Radio Show/Podcast, and need to fix something that has been broken for a while. The hosts have a Skype-In number to take listener calls, and for convenience's sake, I bought and paid for a toll free number for a period of time. I used to use Asterlink for routing calls, but they folded and sent my number to OneBox, who are ridiculously expensive by comparison. I'm looking for a cheap solution for this one simple task: forward toll free calls to a Skype-In number. The definition of cheap is as cheap or cheaper than Asterlink was. I paid something like $2 a month, plus the termination/call rate, which was a fraction of a cent for termination and only whole cents after some serious time on the call. A $20 preload lasted me months at a time. I don't want to be upsold to; I want a simple web-based management screen (CDR/stats are fun!), and obviously it needs to be reliable. What vendors out there are you a fan of that solve this need?

    Read the article

  • I need a reverse proxy solution for SSH

    - by Bond
    Hi, here is the situation: I have a server in a corporate data center for a project, and I have SSH access to this machine on port 22. There are some virtual machines running on this server, and behind everything many other operating systems are running. Since I am behind the data center's firewall, my supervisor asked me whether I can give many people on the Internet direct access to these virtual machines. I know that if I were allowed traffic on a port other than 22 I could set up port forwarding, but since I am not allowed that, what can the solution be in this case? The people who would like to connect might be complete novices who are happy just opening PuTTY on their machines, or maybe even FileZilla. I have configured an Apache reverse proxy for redirecting Internet traffic to the virtual machines on these hosts, but I am not clear on what I can do for SSH. Is there something equivalent to an Apache reverse proxy that can do similar work for SSH in this situation? I do not have the firewall in my hands or any port other than 22 open, and even if I request it they won't open more. Having users SSH twice (once to the gateway, then again to the VM) is not something my supervisor wants.
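
    One option sometimes suggested for exactly this setup is an OpenSSH jump host: users still enter through port 22 on the gateway, but a single client-side ProxyJump entry forwards them straight to the internal VM, so from their point of view it is one command rather than two logins. A minimal sketch, assuming OpenSSH 7.3 or newer on the client side; the host names and address below are invented:

        # ~/.ssh/config on the end user's machine
        Host vm1
            # internal address of the VM, reached via the gateway
            HostName 10.0.0.11
            User guest
            # the one machine reachable from outside on port 22
            ProxyJump you@gateway.example.com

    The user then connects with a single "ssh vm1"; older clients can get the same effect with ProxyCommand and "ssh -W". Whether this counts as the "SSH twice" the supervisor ruled out depends on what exactly was objected to, since the traffic still tunnels through the gateway.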

    Read the article

  • How to wipe the slate clean on an Oracle installation?

    - by nw
    After a basic, successful installation of Oracle 11g, I ran dbca again to enable Enterprise Manager on my database. The operation hung at around 67% for half an hour or more, so I clicked Stop to abort the operation. Things seemed to wrap up cleanly, EM was working, all was well with the world. Then I started getting this dreaded error upon any attempt to connect in SQL*Plus: ORA-12154: TNS:could not resolve the connect identifier specified I thought perhaps the database had been corrupted due to the earlier aborted operation, so I ran dbca again and deleted the database. Then I attempted to create a new database in its stead, using a clone of the template created the first time around. Unfortunately, the clone database operation fails at 50% with the exact same error: ORA-12154: TNS:could not resolve the connect identifier specified How can I clean up the mess I have created, short of reinstalling Oracle entirely from scratch?
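
    ORA-12154 is usually a name-resolution problem rather than a corrupt database: the alias being used does not resolve from tnsnames.ora (or whatever naming method sqlnet.ora specifies), and a half-finished dbca run can leave those files out of step with what actually exists. A rough checklist before a full reinstall (a sketch only, assuming a Linux install with ORACLE_HOME set and a hypothetical SID of orcl):

        # does the alias the tools are using actually resolve?
        cat $ORACLE_HOME/network/admin/tnsnames.ora
        tnsping orcl

        # what does the listener still know about?
        lsnrctl status

        # remove the leftover database cleanly, then recreate it
        dbca -silent -deleteDatabase -sourceDB orcl

    If dbca refuses because the instance no longer starts, the usual manual fallback is removing the stale /etc/oratab entry, the orphaned tnsnames.ora/listener.ora entries, and the old datafiles by hand before re-running dbca.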

    Read the article

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly comprised of password lists and SSH keys for servers. Unfortunately it is impossible to keep track of ever-changing passwords for my servers (and SSH keys), and so I need to keep a list of the passwords. Obviously this list needs to be secure, and also portable (I work from multiple locations). At the moment, I use a 10 MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access my data from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system. So, what other solutions exist, and which ones would you consider to be the most elegant? Thanks.
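
    For reference, the tar-plus-gpg idea mentioned above is only a couple of commands. A sketch with invented file names, assuming gpg and tar exist on every machine involved:

        # pack the secrets directory and encrypt it with a passphrase (AES-256)
        tar czf - secrets/ | gpg --symmetric --cipher-algo AES256 -o secrets.tar.gz.gpg

        # decrypt and unpack on any other box with gpg installed
        gpg --decrypt secrets.tar.gz.gpg | tar xzf -

    The trade-off stays as described: gpg has to be present everywhere, but the encrypted file itself is a single portable blob.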

    Read the article

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --download-only so that the package and its dependencies will be downloaded into my VM, and my server can use the VM as the source repo for apt-get? (See the sketch at the end of this entry.)

    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can simply copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
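
    Regarding Plan A, that is roughly how it can work: apt-get's --download-only (-d) flag leaves the fetched .deb files in /var/cache/apt/archives, which can then be exposed to the offline box as a flat local repository. A sketch of that flow, assuming both machines run the same Ubuntu release (the host name and paths are illustrative):

        # on the Internet-connected VM: fetch the package and its missing dependencies, install nothing
        apt-get install --download-only nginx

        # turn the cache into a simple repository and copy it across
        cd /var/cache/apt/archives
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz    # dpkg-scanpackages comes from dpkg-dev
        rsync -av /var/cache/apt/archives/ devserver:/srv/local-repo/

        # on the offline server: point apt at the copied directory
        # (apt will warn that these packages are unauthenticated unless the repo is signed)
        echo 'deb file:/srv/local-repo ./' > /etc/apt/sources.list.d/local.list
        apt-get update && apt-get install nginx

    One caveat: --download-only only fetches dependencies the VM does not already have installed, so the VM should be kept close to the offline server's state, or apt-cacher (as in the update above) can be used instead, which sidesteps the issue.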

    Read the article

  • What's the difference between an instance and a server process in an Oracle database?

    - by Summer_More_More_Tea
    Hi folks: I'm now getting familiar with the Oracle database. Unfortunately, I'm puzzled by the concepts of instance and server process. My question is: what is the difference between an instance and a server process? What's more, what is the life cycle of an instance and of a server process, respectively? My textbook at hand is about Oracle 9i, which doesn't give me a clear explanation. Any reply will be appreciated. Thanks in advance. Kind regards!
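
    One practical way to see the distinction from the command line, sketched here assuming SQL*Plus access as a privileged user (credentials are placeholders): the instance is the single set of shared memory plus background processes started for the database, while each dedicated-mode session gets its own server process that lives only as long as the session.

        $ sqlplus system/password
        SQL> -- the one running instance
        SQL> select instance_name, status, startup_time from v$instance;
        SQL> -- background processes versus the per-session server processes
        SQL> select background, count(*) from v$process group by background;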

    Read the article

  • RoundhousE now supports Oracle, SQL2000

    - by Robz / Fervent Coder
    RoundhousE, the database migration software that is based on SQL scripts, has added support for Oracle and SQL 2000. There have also been numerous other little things, including better logging and a script run errors table. The script errors table captures what went wrong when/if your scripts are not quite up to par or there is some other issue. A special thanks goes out to http://twitter.com/PascalMestdach and http://twitter.com/jochenjonc. They worked hard on this and all I did was provide guidance and help bring it back to the trunk. This is a preview of the new log:

        ==================================================
        Versioning
        ==================================================
        Attempting to resolve version from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml using //buildInfo/version.
        Found version 0.5.0.188 from C:\code\roundhouse\code_drop\sample\deployment\_BuildInfo.xml.
        Migrating TestRoundhousE from version 0 to 0.5.0.188.
        Versioning TestRoundhousE database with version 0.5.0.188 based on http://roundhouse.googlecode.com/svn.
        ==================================================
        Migration Scripts
        ==================================================
        Looking for Update scripts in "C:\code\roundhouse\code_drop\sample\deployment\..\db\TestRoundhousE\up". These should be one time only scripts.
        --------------------------------------------------
        Running 0001_CreateTables.sql on (local) - TestRoundhousE.
        Running 0002_ChangeTable.sql on (local) - TestRoundhousE.
        Running 0003_TestBatchSplitter.sql on (local) - TestRoundhousE.
        --------------------------------------------------

    But what are you waiting for? Head out and grab the latest release today!

    Read the article

  • GWTAI applet integration in GWT problem

    - by andy
    Hi everybody. I'm working with gwtai to integrate a Java applet into my GWT project. Basic communication from my main application to the applet (such as invoking simple methods that return int or boolean values) works. But the main reason I need to integrate this applet is that I need it to connect to another server, receive an answer, and pass it to my GWT application. So there's one basic method in the applet:

        public String SendAndReceive(String host, int sendPort, int receivePort, String query)

    that connects to the server, receives an answer and returns this answer as a string. When I now try to invoke this method like this:

        applet.SendAndReceive("0.0.0.0", 9099, 2000, "show streams;");

    I constantly run into the following error (full error message at the end):

        com.google.gwt.core.client.JavaScriptException: (String): Error calling method on NPObject!
        [plugin exception: java.security.AccessControlException: access denied (java.util.PropertyPermission * read,write)]

    I couldn't find a solution (gwtai is quite an uncommon topic). What I found out (and what the exception lets one assume) is that there's a security problem - maybe because I'm connecting to another server. I also read something about the browser's same-origin policy, which would point in the same direction. Up to now I have never worked with Java applets, so if someone has a solution or a hint I would be very thankful. I can provide more code if that helps. Thanks, Andy

    The full error message:

        21:03:49.864 [ERROR] [follovizergwt] Unable to load module entry point class follovizer.gwt.client.FolloVizerGWT (see associated exception for details)
        com.google.gwt.core.client.JavaScriptException: (String): Error calling method on NPObject! [plugin exception: java.security.AccessControlException: access denied (java.util.PropertyPermission * read,write)].
            at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:195)
            at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:120)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:507)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:264)
            at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91)
            at follovizer.gwt.client.AnduINAppletImpl.SendAndReceive(AnduINAppletImpl.java)
            at follovizer.gwt.client.FolloVizerGWT.createLayout(FolloVizerGWT.java:92)
            at follovizer.gwt.client.FolloVizerGWT.onModuleLoad(FolloVizerGWT.java:40)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369)
            at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
            at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
            at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
            at java.lang.Thread.run(Thread.java:619)

    Read the article

  • Server side Xforms form validation and integration into ASP.NET

    - by Nigel
    I have recently been investigating methods of creating web-based forms for an ASP.NET web application that can be edited and managed at runtime. For example an administrator might wish to add a new validation rule or a new set of fields. The holy grail would provide a means of specifying a form along with (potentially very complex) arbitrary validation rules, and allocation of data sources for each field. The specification would then be used to update the deployed form in the web application which would then validate submissions both on the client side and on the server side. My investigations led me to Xforms and a number of technologies that support it. One solution appears to be IBM Lotus Forms, but this requires a very large investment in terms of infrastructure, which makes it infeasible, although the forms designer may be useful as a stand-alone tool for creating the forms. I have also discounted browser plug-ins as the form must be publicly visible and cross-browser compliant. I have noticed that there are numerous javascript libraries that provide client side implementations given an Xforms schema. These would provide a partial solution but server side validation is still a requirement. Another option seems to involve the use of server side solutions such as the Java application Orbeon. Orbeon provides a tool for specifying the forms (although not as rich as Lotus Forms Designer), but the most interesting point is that it can translate an XForms schema into an XHTML form complete with validation. The fact that it is written in Java is not a big problem if it is possible to integrate with the existing ASP.NET application. So my question is whether anyone has done this before. It sounds like a problem that should have been solved but is inherently very complex. It seems possible to use an off-the-shelf tool to design the form and export it to an Xforms schema and xhtml form, and it seems possible to take that xforms schema and form and publish it using a client side library. What seems to be difficult is providing a means of validating the form submission on the server side and integrating the process nicely with .NET (although it seems the .NET community doesn't involve themselves with XForms; please correct me if I'm wrong on this count). I would be more than happy if a product provided something simple like a web service that could validate a submission against a schema. Maybe Orbeon does this but I'd be grateful if somebody in the know could point me in the right direction before I research it further. Many thanks.

    Read the article

  • Resharper 4.5 and dotTrace 3.1 integration problem

    - by Ken Egozi
    Hi. I am not able to get ReSharper to profile a unit test, although I have dotTrace installed on my machine. Actually, the dotTrace button in Visual Studio is also greyed out, even though the Visual Studio Add-Ins menu lists dotTrace as started. VS2008 SP1, Windows 7 64-bit, R# 4.5, dotTrace 3.1. I tried restarts, reinstalls, re-whatever. Has anyone experienced this too? Does anyone have a solution for it?

    Read the article

  • Integration of WordPress with ASP.NET

    - by Simon
    We are working with a client who has an inventory system built on ASP.NET. We are developing his e-commerce solution within WordPress, and it needs to be updated by the ASP.NET inventory system. Ideas on how this can be integrated would be appreciated!

    Read the article

  • "[INS-30131] Initial setup required for the execution of installer validations failed." Encountering this error while installing Oracle database 12c. [on hold]

    - by user132992
    I am trying to install the Oracle 12c database on my machine running Fedora 20, and I am encountering this error: "[INS-30131] Initial setup required for the execution of installer validations failed." The details read:

        Cause - Failed to access the temporary location.
        Action - Ensure that the current user has required permissions to access the temporary location.
        Additional Information:
        - Framework setup check failed on all the nodes
          - Cause: Cause Of Problem Not Available
          - Action: User Action Not Available
        Summary of the failed nodes:
        fedora - Version of exectask could not be retrieved from node "fedora"
          - Cause: Cause Of Problem Not Available
          - Action: User Action Not Available

    To eliminate this error I have tried various measures, including changing the permissions on the tmp folder and restarting the computer, but none of them works. Please, someone help me out of this. Any kind of help will be appreciated.
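
    INS-30131 at this stage is the installer failing its own prerequisite plumbing rather than a database problem, and the usual first suspects are the scratch directory and host-name resolution. A few generic checks, offered as a sketch rather than a guaranteed fix:

        # /tmp must be writable by the installing user and not mounted noexec
        ls -ld /tmp
        mount | grep ' /tmp '
        chmod 1777 /tmp                    # restore the normal permissions if they were changed

        # the node name the installer reports ("fedora") must resolve locally
        hostname
        grep "$(hostname)" /etc/hosts

        # the installer honours TMP/TMPDIR if a different scratch area is preferred
        export TMP=/home/oracle/stage_tmp TMPDIR=/home/oracle/stage_tmp    # hypothetical path

    It is also worth remembering that Fedora is not a certified platform for 12c, so some prerequisite checks can fail even when the underlying setup is otherwise sound.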

    Read the article

  • Stuck with Documentum Still? Do MORE with Oracle WebCenter!

    - by Michael Snow
    WEBCAST TODAY!! 03/22/12

    Do you need to lower costs? Raise productivity? Foster innovation? Improve online engagement? But you're still stuck with Documentum? Step away from the ledge – there is hope – let us help you.

    Top 4 Content Imperatives
    · Lower Costs – Reduce labor, maintenance fees, storage and electrical consumption
    · Raise Productivity – Automation and integration, communication, findability
    · Foster Innovation – Enable collaboration, expertise location
    · Improve Online Engagement – Enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce the money we are wasting on things like storing content that is redundant or no longer useful. We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What's more, their wages total 18 percent of the United States GDP. And so we can't afford information systems that don't let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward. And really, to get ourselves out of the current financial status, we can only cut costs so far. We have to innovate out of economic tough times – to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization's accomplishments.

    Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations that require business users to be enabled for dynamic marketing content creation, updates, and targeted content creation and management. Unfortunately, if you are currently stuck with Documentum, you are really lacking in your web experience management capabilities. Documentum previously used FatWire for web publishing. Now FatWire is part of Oracle.

    Oracle provides powerful web engagement capabilities:
    · Increase sales and loyalty by optimizing online engagement
    · Create, manage and moderate contextually relevant, targeted and interactive online experiences
    · Optimize customer engagement across web, mobile and social channels
    · Manage large scale multichannel global online presence with integration to enterprise applications
    · Enable business users to control their content and make their own updates
    · Publish content from native files – enable navigation of project documents, procedures, policy information
    · Enable content display and updates from existing web applications – one click to drag and drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management. So… if you are still stuck with Documentum and don't know what to do – contact us – not only will Oracle help you step away from the ledge, but also, with the MoveOff Documentum program, we are offering you a way: trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How's that for a nice bonus? It's time to stop maintaining Documentum and start innovating with Oracle WebCenter. Learn More Here! To learn more about what Oracle WebCenter can offer you today, join us for a webcast and your eyes will be opened to all that's possible. Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with python 2.6.6 on ubuntu Maverick, with Oracle 11gR2 Enterprise edition. I am able to connect to my oracle db just fine, but once I do that, the subprocess module does not work anymore. Here is an iPython session that reproduces the problem...

        In [1]: import subprocess as sp, cx_Oracle as dbh

        In [2]: sp.call(['whoami'])
        sharat
        Out[2]: 0

        In [3]: con = dbh.connect('system', 'password')

        In [4]: con.close()

        In [5]: sp.call(['whomai'])
        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        /home/sharat/desk/calypso-launcher/<ipython console> in <module>()
        /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs)
            468     retcode = call(["ls", "-l"])
            469     """
        --> 470     return Popen(*popenargs, **kwargs).wait()
            471
            472
        /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
            621                             p2cread, p2cwrite,
            622                             c2pread, c2pwrite,
        --> 623                             errread, errwrite)
            624
            625         if mswindows:
        /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
           1134
           1135         if data != "":
        -> 1136             _eintr_retry_call(os.waitpid, self.pid, 0)
           1137             child_exception = pickle.loads(data)
           1138             for fd in (p2cwrite, c2pread, errread):
        /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args)
            453     while True:
            454         try:
        --> 455             return func(*args)
            456         except OSError, e:
            457             if e.errno == errno.EINTR:
        OSError: [Errno 10] No child processes

    So, the call to sp.call works fine before connecting to oracle, but breaks after that. Even if I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here. I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but it didn't help me. Any help appreciated. Thanks.

    Read the article

  • The Application Architecture Domain

    - by Michael Glas
    I have been spending a lot of time thinking about Application Architecture in the context of EA. More specifically, as an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?

    There are several definitions of Application Architecture. TOGAF says “The objective here [in Application Architecture] is to define the major kinds of application system necessary to process the data and support the business”. FEA says the Application Architecture “Defines the applications needed to manage the data and support the business functions”. I agree with these definitions. They reflect what the Application Architecture domain does. However, they need to be decomposed to be practical.

    I find it useful to define a set of views into the Application Architecture domain. These views reflect what an EA needs to consider when working with/in the Applications Architecture domain. These viewpoints are, at a high level:

    Capability View: This view reflects how applications align with business capabilities. It is a superset of the following views when viewed in aggregate. By looking at the Application Architecture domain in terms of the business capabilities it supports, you get a good perspective on how those applications are directly supporting the business.

    Technology View: The technology view reflects the underlying technology that makes up the applications. Based on the number of rationalization activities I have seen (more specifically application rationalization), the phrase “complexity equals cost” drives the importance of the technology view, especially when attempting to reduce that complexity through standardization-type activities. Some of the technology components to be considered are:
    - Software: The application itself as well as the software the application relies on to function (web servers, application servers).
    - Infrastructure: The underlying hardware and network components required by the application and supporting application software.
    - Development: How the application is created and maintained. This encompasses development components that are part of the application itself (i.e. customizable functions), as well as bolt-on development through web services, APIs, etc. The maintenance process itself also falls under this view.
    - Integration: The interfaces that the application provides for integration as well as the integrations to other applications and data sources the application requires to function.
    - Type: Reflects the kind of application (mash-up, 3-tiered, etc). (Note: functional types [CRM, HCM, etc.] are reflected under the capability view.)

    Organization View: Organizations are comprised of people, and those people use applications to do their jobs. Trying to define the application architecture domain without taking the organization that will use/fund/change it into consideration is like trying to design a car without thinking about who will drive it (i.e. you may end up building a Formula 1 car for a family of 5 that is really looking for a minivan). This view reflects the people aspect of the application. It includes:
    - Ownership: Who ‘owns’ the application? This will usually reflect primary funding and utilization, but not always.
    - Funding: Who funds both the acquisition/creation as well as the on-going maintenance (funding to create/change/operate)?
    - Change: Who can/does request changes to the application, and what process do they follow?
    - Utilization: Who uses the application, how often do they use it, and how do they use it?
    - Support: Which organization is responsible for the on-going support of the application?

    Information View: Whether or not you subscribe to the view that “information drives the enterprise”, it is a fact that information is critical. The management, creation, and organization of that information are primary functions of enterprise applications. This view reflects how the applications are tied to information (or at a higher level – how the Application Architecture domain relates to the Information Architecture domain). It includes:
    - Access: The application is the mechanism by which end users access information. This could be through a primary application (i.e. a CRM application), or through an information access type application (a BI application as an example).
    - Creation: Applications create data in order to provide information to end-users (i.e. an application creates an order to be used by an end-user as part of the fulfillment process).
    - Consumption: Describes the data required by applications to function (i.e. a product id is required by a purchasing application to create an order).

    Application Service View: Organizations today are striving to be more agile. As an EA, I need to provide an architecture that supports this agility. One of the primary ways to achieve the required agility in the application architecture domain is through the use of ‘services’ (think SOA, web services, etc.). Whether it is through building applications from the ground up utilizing services, service-enabling an existing application, or buying applications that are already ‘service enabled’, compartmentalizing application functions for re-use helps enable flexibility in the use of those applications in support of the required business agility. The application service view consists of:
    - Services: Here, I refer to the generic definition of a service: “a set of related software functionalities that can be reused for different purposes, together with the policies that should control its usage”.
    - Functions: The activities within an application that are not available/applicable for re-use. This view is helpful when identifying duplicate functions between applications that are not service enabled.

    Delivery Model View: It is hard to talk about EA today without hearing the terms ‘cloud’ or shared services. Organizations are looking at the ways their applications are delivered for several reasons: to reduce cost (both CAPEX and OPEX), to improve agility (time to market as an example), etc. From an EA perspective, where/how an application is deployed has impacts on the overall enterprise architecture. From integration concerns to SLA requirements to security and compliance issues, the Enterprise Architect needs to factor in how applications are delivered when designing the Enterprise Architecture. This view reflects how applications are delivered to end-users. The delivery model view consists of different types of delivery mechanisms/deployment options for applications:
    - Traditional: Reflects non-cloud type delivery options. The most prevalent consists of an application running on dedicated hardware (usually specific to an environment) for a single consumer.
    - Private Cloud: The application runs on infrastructure provisioned for exclusive use by a single organization comprising multiple consumers.
    - Public Cloud: The application runs on infrastructure provisioned for open use by the general public.
    - Hybrid: The application is deployed on two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

    While by no means comprehensive, I find that applying these views to the application domain gives a good understanding of what an EA needs to consider when effecting changes to the Application Architecture domain.

    Finally, the application architecture domain is one of several architecture domains that an EA must consider when developing an overall Enterprise Architecture. The Oracle Enterprise Architecture Framework defines four primary domains: Business Architecture, Application Architecture, Information Architecture, and Technology Architecture. Each domain links to the others either directly or indirectly at some point. Oracle links them at a high level as follows: Business Capabilities and/or Business Processes (Business Architecture) link to the Applications that enable the capability/process (Applications Architecture – COTS, Custom), which link to the Information Assets managed/maintained by the Applications (Information Architecture), which link to the technology infrastructure upon which all this runs (Technology Architecture – integration, security, BI/DW, DB infrastructure, deployment model). There are, however, times when the EA needs to narrow focus to a particular domain for some period of time. These views help me to do just that.

    Read the article

  • What's the fastest filesystem for developer builds?

    - by Dan Fabulich
    I'm putting together a Linux box that will act as a continuous integration build server; we'll mostly build Java stuff, but I think this question applies to any compiled language. What filesystem and configuration settings should I use? (For example, I know I won't need atime for this!) The build server will spend a lot of time reading and writing small files, and scanning directories to see which files have been modified.
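
    Purely as an illustration of the atime point raised above: a single /etc/fstab line covers it, assuming an ext4 volume dedicated to the build workspace (device and mount point are invented):

        # skip access-time updates on the CI workspace
        /dev/sdb1   /var/lib/ci-builds   ext4   defaults,noatime   0   2

    On recent kernels noatime already implies nodiratime, so that one option removes the access-time metadata writes that a build server's constant stat/read cycle would otherwise generate.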

    Read the article

  • Which web services should be integrated?

    - by Adam Matan
    Mail + social networking? One identity for all sites? Integration between all social networks and IM services? Which web services should be integrated in the future, and why? Edit: Clarification: By "integrating", I mean that two or more services should be seamlessly connected, and that connection would benefit the user. For example, I would really like to have an IM application that would support many accounts on many IM providers (and IMHO, Pidgin's not ready yet).

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end to administer). I have two nearly identical boxes with AMD Phenom II X4s and ASUS motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, which runs Win2008 with MS SQL Server). I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the Jungle Disk service (using Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With Jungle Disk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. The much better solution would of course be something that allows me to instantly take the difference of two points in time (say what was written from 6 am to 7 am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect (see the sketch below). However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, ZFS, Zumastor, or other?
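
    As a sketch of the ZFS send/receive idea (pool, dataset, and host names are invented, and it assumes the VM images already live on a ZFS dataset rather than on the current LVM volumes):

        # snapshot, then ship only the delta since the previous snapshot
        zfs snapshot tank/vmstore@2012-06-02
        zfs send -i tank/vmstore@2012-06-01 tank/vmstore@2012-06-02 \
          | bzip2 -c \
          | ssh backup.example.com "bzcat | zfs receive backuppool/vmstore"

    The incremental stream contains only the blocks written between the two snapshots, which is exactly the "difference of two points in time" described above.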

    Read the article

  • Pro*C in Oracle XE

    - by srandpersonia
    I downloaded the free Express Edition of Oracle, Oracle XE. I couldn't find the Pro*C compiler in this edition. I read somewhere that the Oracle 9i client has Pro*C, so I presumed that the Oracle client for 10g XE should have it too, and downloaded it. But to my disappointment, I can't find it there either. :( Is there a way to download the older Oracle 9i and use it to connect to 10g XE without any compatibility problems? Or is it possible to download the Pro*C compiler alone? I don't want to download the standard editions as they are too large (2 GB). Thanks.

    Read the article
