Search Results

Search found 14292 results on 572 pages for 'high integrity systems'.


  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization, rather than adding extra time and cost to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

    Some of the key points made in the sessions included:
      • The focus of XBRL tools is moving from production to consumption.
      • As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
      • XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
      • The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
      • XBRL is helping standards-setters like the FASB speed their analysis of the impact of proposed accounting rule changes.
      • Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.

    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-source tools and most of them focus on consuming XBRL-based data. The 5 finalists included:
      • Advanced XBRL Processing from Oxide Solutions – XBRL viewer for taxonomies, filings and company data with peer comparison capabilities.
      • Arrelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
      • Calcbench – XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
      • XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
      • XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.

    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge.

    XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier. Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, see the GRI web site: www.globalreporting.org.

    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market has been that XBRL mainly benefits the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This applies to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting – which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings.

    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links:
    http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
    http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html
    Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
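
    As a rough illustration of the linear resource model described at the start of this post, the following sketch (plain Java, not a Coherence API; every name and figure is a made-up assumption) extrapolates measured CPU, network and memory usage from a baseline load test to a target load:

        // A minimal sketch of the linear sizing model: measure resource usage at a
        // known load, then scale proportionally to a target load. Illustrative only.
        public class LinearSizingEstimate {

            // Assumed measurements from a baseline load test (not real numbers).
            static final double BASELINE_REQUESTS_PER_SEC = 5_000.0;
            static final double BASELINE_CPU_CORES_USED   = 12.0;   // across the cluster
            static final double BASELINE_NETWORK_MBPS     = 400.0;
            static final double BASELINE_HEAP_GB          = 64.0;

            public static void main(String[] args) {
                double targetRequestsPerSec = 40_000.0;
                double scale = targetRequestsPerSec / BASELINE_REQUESTS_PER_SEC;

                // Linear extrapolation: assumes each resource grows proportionally with
                // load, which (as noted above) is reasonable for most well-designed
                // applications but breaks down at extreme scale.
                System.out.printf("Estimated CPU cores:    %.1f%n", BASELINE_CPU_CORES_USED * scale);
                System.out.printf("Estimated network Mbps: %.1f%n", BASELINE_NETWORK_MBPS * scale);
                System.out.printf("Estimated heap (GB):    %.1f%n", BASELINE_HEAP_GB * scale);
            }
        }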

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Components:
      • Entity Objects (EOs)
      • View Objects (VOs)
      • Application Modules (AMs)
      • Framework Extension Classes
    Primary reusable ADF Controller components:
      • Bounded Task Flows (BTFs)
      • Task Flow Templates
    Primary reusable ADF Faces components:
      • Page Templates
      • Skins
      • Declarative Components
      • Utility Classes

    Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic.

    Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose from many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you can look in the repository for it and then add it into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look-and-feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library.

    Reusable components:
      • Data Control – Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls.
      • Application Module – When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be part of the ADF Library JAR and available for reuse.
      • Business Components – Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module.
      • Task Flows & Task Flow Templates – Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages; the drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, you will be prompted to select the one to which the task flow call activity and control flow should be added. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable.
      • Page Templates – You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse.
      • Declarative Components – You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project.

    You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into an ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.
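
    As a generic illustration of the decoupling described above (plain Java, not the ADF APIs; all class names are invented for the example), the view below depends only on a model interface, so the business logic can be reworked without touching the presentation code:

        // Model layer: business data and logic, with no knowledge of any UI.
        interface EmployeeModel {
            String displayName(int employeeId);
        }

        class HrEmployeeModel implements EmployeeModel {
            public String displayName(int employeeId) {
                // Stand-in for a real lookup against business services or a database.
                return "Employee #" + employeeId;
            }
        }

        // View layer: renders whatever the model provides and never touches the
        // business logic directly, so either side can change independently.
        class EmployeeView {
            private final EmployeeModel model;

            EmployeeView(EmployeeModel model) {
                this.model = model;
            }

            void render(int employeeId) {
                System.out.println("<h1>" + model.displayName(employeeId) + "</h1>");
            }
        }

        public class MvcSketch {
            public static void main(String[] args) {
                new EmployeeView(new HrEmployeeModel()).render(42);
            }
        }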

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise.

    So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?" It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it.

    This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value.

    The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
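
    To make the optimistic pattern concrete, here is a minimal sketch using a plain java.util.concurrent.ConcurrentMap -- for illustration only, not the Coherence NamedCache API, and the account/balance scenario is invented. The caller states its assumption about the old value via a conditional replace and retries when that assumption fails, which is exactly how a stale read shows up in practice:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class OptimisticUpdateSketch {

            // Hypothetical cache of account balances.
            static final ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();

            // Optimistic update: read (a potentially "fuzzy" read), compute, then apply
            // the change only if the entry still holds the value we read. A failed
            // conditional update is how a stale read manifests -- we simply retry.
            static void credit(String account, long amount) {
                while (true) {
                    Long oldValue = cache.get(account);             // may already be stale
                    long newValue = (oldValue == null ? 0L : oldValue) + amount;
                    boolean applied = (oldValue == null)
                            ? cache.putIfAbsent(account, newValue) == null
                            : cache.replace(account, oldValue, newValue); // states our assumption
                    if (applied) {
                        return;                                      // assumption held
                    }
                    // assumption violated (stale read) -- loop and re-read the current value
                }
            }

            public static void main(String[] args) {
                credit("acct-42", 100);
                credit("acct-42", 250);
                System.out.println(cache.get("acct-42"));            // prints 350
            }
        }

    The pessimistic alternative would lock the entry before reading (for example via an explicit lock around the read-modify-write), trading concurrency for a coherent read.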

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop.

    One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns.

    The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But if it was simple to do? That was our inspiration for SQL Backup 7.

    So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle.

    Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup.

    We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it.

    Post by Brian Harris

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic, since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR 166. This also allows consistency between the Java SE and Java EE concurrency programming models.

    There are four main programming interfaces available:
      • ManagedExecutorService
      • ManagedScheduledExecutorService
      • ContextService
      • ManagedThreadFactory

    ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

        <resource-env-ref>
          <resource-env-ref-name>concurrent/BatchExecutor</resource-env-ref-name>
          <resource-env-ref-type>javax.enterprise.concurrent.ManagedExecutorService</resource-env-ref-type>
        </resource-env-ref>

    and available as:

        @Resource(name="concurrent/BatchExecutor")
        ManagedExecutorService executor;

    It's recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed must implement the java.lang.Runnable or java.util.concurrent.Callable interface, as in:

        public class MyTask implements Runnable {
          public void run() {
            // business logic goes here
          }
        }

    or:

        public class MyTask2 implements Callable<Date> {
          public Date call() {
            // business logic goes here
          }
        }

    The task is then submitted to the executor using one of the submit methods, which return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion.

        Future<String> future = executor.submit(new MyTask(), "done");
        . . .
        String result = future.get();

    Another way to submit tasks is:

        class MyTask implements Callable<Long> { . . . }
        class MyTask2 implements Callable<Date> { . . . }
        ArrayList<Callable> tasks = new ArrayList<Callable>();
        tasks.add(new MyTask());
        tasks.add(new MyTask2());
        List<Future<Object>> result = executor.invokeAll(tasks);

    The ManagedExecutorService may be configured with different properties, such as:
      • Hung Task Threshold: time in milliseconds that a task can execute before it is considered hung
      • Pool Info
          – Core Size: number of threads to keep alive
          – Maximum Size: maximum number of threads allowed in the pool
          – Keep Alive: time to allow threads to remain idle when the number of threads exceeds Core Size
          – Work Queue Capacity: number of tasks that can be stored in the inbound buffer
      • Thread Use: whether the application intends to run short or long-running tasks; accordingly, pooled or daemon threads are picked

    ManagedScheduledExecutorService adds delay and periodic task running capabilities to ManagedExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

        <resource-env-ref>
          <resource-env-ref-name>concurrent/timedExecutor</resource-env-ref-name>
          <resource-env-ref-type>javax.enterprise.concurrent.ManagedScheduledExecutorService</resource-env-ref-type>
        </resource-env-ref>

    and available as:

        @Resource(name="concurrent/timedExecutor")
        ManagedScheduledExecutorService executor;

    The tasks are then submitted using the submit, invokeXXX or scheduleXXX methods.

        ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS);

    This will create and execute a one-shot action that becomes enabled after a 5-second delay. More control is possible using one of the newly added methods:

        class MyTaskListener implements ManagedTaskListener {
          public void taskStarting(...) { . . . }
          public void taskSubmitted(...) { . . . }
          public void taskDone(...) { . . . }
          public void taskAborted(...) { . . . }
        }
        ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener());

    Here, ManagedTaskListener is used to monitor the state of a task's Future.

    ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is:

        @Resource(name="concurrent/myThreadFactory")
        ManagedThreadFactory factory;
        . . .
        Thread thread = factory.newThread(new Runnable() { . . . });

    concurrent/myThreadFactory is a JNDI resource.

    There is a lot of interesting content in the Early Draft -- download it and read it yourself. The implementation will be made available soon and will also be integrated in GlassFish 4. Some references for further exploring:
      • Javadoc
      • Early Draft Specification
      • concurrency-ee-spec.java.net
      • [email protected]
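
    To see how the pieces fit together, here is a minimal, self-contained sketch of a servlet that uses a container-managed executor in the style described above. The JNDI name, servlet path and task are illustrative assumptions, and the javax.enterprise.concurrent types come from the JSR 236 Early Draft, so they may change before the final release:

        import java.io.IOException;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.Future;

        import javax.annotation.Resource;
        import javax.enterprise.concurrent.ManagedExecutorService;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Illustrative servlet: offloads work to a container-managed executor instead
        // of spawning a raw java.lang.Thread the container knows nothing about.
        @WebServlet("/report")
        public class ReportServlet extends HttpServlet {

            // Same JNDI name as in the examples above (an assumption for this sketch).
            @Resource(name = "concurrent/BatchExecutor")
            private ManagedExecutorService executor;

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Submit a Callable; the container supplies and manages the thread.
                Future<String> future = executor.submit(new Callable<String>() {
                    public String call() {
                        return "report generated";   // stand-in for real business logic
                    }
                });
                try {
                    resp.getWriter().println(future.get());   // wait for the task's result
                } catch (InterruptedException | ExecutionException e) {
                    throw new ServletException(e);
                }
            }
        }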

    Read the article

  • What's New in Oracle VM VirtualBox 4.2?

    - by Fat Bloke
    A year is a long time in the IT industry. Since the last VirtualBox feature release, which was a little over a year ago, we've seen: new releases of cool new operating systems, such as Windows 8, ChromeOS, and Mountain Lion; a myriad of new Linux releases, from big enterprise-class distributions like Oracle Linux 6.3 to accessible desktop distros like Ubuntu 12.04 and Fedora 17; and the spec of a typical PC or laptop double in power. All of these events have influenced our new VirtualBox version which we're releasing today. Here's how...

    Powerful hosts
    One of the trends we've seen is that as the average host platform becomes more powerful, our users are consistently running more and more VMs. Some of our users have large libraries of VMs of various vintages, whilst others have groups of VMs that are run together as an assembly of the various tiers in a multi-tiered software solution, for example, a database tier, middleware tier, and front-ends. So we're pleased to unveil a more powerful VirtualBox Manager to address the needs of these users:

    VM Groups
    Groups allow you to organize your VM library in a sensible way, e.g. by platform type, by project, by version, by whatever. To create groups you can drag one VM onto another, or select one or more VMs and choose Machine...Group from the menu bar. You can expand and collapse groups to save screen real estate, and you can Enter and Leave a group (think iPad navigation here) by using the right and left arrow keys when groups are selected. But groups are more than passive folders, because you can now also perform operations on groups, rather than on all the individual VMs. So if you have a multi-tiered solution you can start the whole stack up with just one click.

    Autostart
    Many VirtualBox users run dedicated services in their VMs, for example, running a wiki. With these types of VM workloads, you really want the VM to start up when the host machine boots up. So with 4.2 we've introduced a cross-platform auto-start mechanism to allow you to treat VMs as host services.

    Headless VM Launching
    With VMs such as web servers, wikis, and other types of server-class workloads, the console of the VM is pretty much redundant. For some time now VirtualBox has offered a separate launch mechanism for these VMs, namely the command-line interface commands VBoxHeadless or VBoxManage startvm ... --type headless. But with 4.2 we also allow you to launch headless VMs from the Manager. Simply hold down Shift when launching the VM from the Manager. It's that easy. But how do you stop a headless VM? Well, with 4.2 we allow you to Close the VM from the Manager. (BTW it's best to use the ACPI Shutdown method, which allows the guest VM to close down gracefully.)

    Easy VM Creation
    For our expert users, the New VM Wizard was a little tiresome, so now there's a faster 2-click VM creation mode. Just Hide the description when creating a new VM.

    Powerful VMs
    As the hosts have become more powerful, so are the guests that are running inside them. Here are some of the 4.2 features to accommodate them:

    Virtual Network Interface Cards
    With 4.2, it's now possible to create VMs with up to 36 NICs, when using the ICH9 chipset emulation. But with great power comes great responsibility (didn't Obi-Wan say something similar?), and so we have also introduced bandwidth limiting to prevent a rogue VM stealing the whole pipe.

    VLAN tagging
    Some of our users leverage VLANs extensively, so we've enhanced the E1000 NICs to support this.

    Processor Performance
    If you are running a CPU which supports Nested Paging (aka EPT in the Intel world), such as most of the Core i5 and i7 CPUs, or are running an AMD Bulldozer or later, you should see some performance improvements from our work with these processors. And while we're talking processors, we've added support for some of the more modern VIA CPUs too.

    Powerful Automation
    Because VirtualBox runs atop a fully blown operating system, it makes sense to leverage the capabilities of the host to run scripts that can drive the guest VMs. Guest Automation was introduced in a prior release, but with 4.2 we've revamped the APIs to allow a richer and more powerful set of operations to be executed by the guest. Check out the IGuest APIs in the VirtualBox Programming Guide and Reference (SDK).

    Powerful Platforms
    All the hardcore engineering that has gone into 4.2 has been done for a purpose, and that is to deliver a fast and powerful engine that can run almost any x86 OS because of the integrity of the virtualization. So we're pleased to add support for these platforms:
      • Mac OS X "Mountain Lion"
      • Windows 8
      • Windows Server 2012
      • Ubuntu 12.04 ("Precise Pangolin")
      • Fedora 17
      • Oracle Linux 6.3
    Here's the proof:

    We don't have time to go into the myriad of smaller improvements, such as support for burning audio CDs from a guest, bi-directional clipboard control, drag-and-drop of files into Linux guests, etc., so we'll leave that as an exercise for the user as soon as you've downloaded from the Oracle or community site and taken a peek at the User Guide. So all in all, a pretty solid release, one that we hope you'll enjoy discovering. - FB

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • API server not functioning ["The connection has been reset"]

    - by Miguel Beltrán
    I'm having some trouble with one of my servers. I've built an application with two servers: a frontend, and an API server (Ubuntu) that the frontend grabs its data from. Yesterday the site had a lot of visits and the API server stopped working, but: -I can still work with MySQL over SSH. -The memory usage is OK. -The logs are OK. -The bandwidth usage is OK. -If I restart the server or Apache2, it works for a short time (3-4 minutes). Most importantly, if I try to access the API (it's REST-style over HTTP), Firefox shows the error "The connection has been reset". I've tried: -Restarting the server -Restarting Apache2 -Restarting MySQL -Viewing the Apache2/MySQL logs I don't know too much about systems, so I don't know what else to do.

    Read the article

  • fatal: git-http-push-failed (return code 22)

    - by Mariusz
    Hello, it's me again. After having problems establishing a connection to github.com, I now have a problem with the next step - pushing. I should mention that I am a novice with Git and this whole distributed version control world. I have done git init, then git add *.h and git add *.cpp, but currently git status does not print anything in the "# On branch master" section. Previously it correctly printed the whole list of added files; now this list is gone. Next, I executed: git remote add origin https://github.com/mgeeky/disasm.git and an error occurred after: git push origin master Username: Password: error: Cannot access URL https://github.com/mgeeky/disasm.git/, return code 22 fatal: git-http-push failed What should I do now? I've tried: git push origin Username: Password: No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'master'. Everything up-to-date But it seems to be okay.

    Read the article

  • "Host usb device connections disabled" in VMware???

    - by ZlateWay
    I installed Linux, Windows XP and Chrome OS in VMware Workstation 7, and in every OS the USB host doesn't work. When I start one of the operating systems this message shows up: "host usb device connections disabled", and under that: "The connection to the VMware USB Arbitration Service was unsuccessful. Please check the status of this service in the Microsoft Management Console." So what should I do? What do I need to install to make the USB host work? BTW I use Windows 7 as the host OS. Thanks

    Read the article

  • What does 'Highest active time' for disk activity in Windows resource monitor mean?

    - by Nick R
    I know what the disk IO, disk queue length and other measures are, but what does 'Highest active time' mean? Is it the amount of time the disk is busy handling requests, or something else? When it is high, does it mean the CPU is busy doing some IO work, or is it just indicating that the disk is busy handling requests? I'm trying to work out if 50% active time means that 50% of the time the disk is either seeking, reading or writing, rather than the kernel spending 50% of its time servicing IO requests. Edit: Another quick data point here. If you look at the difference between an SSD and a physical disk, the SSD has significantly less activity, so I guess this really means the amount of time the operating system is waiting for the disk to respond and return data.

    Read the article

  • Continual "The Windows Filtering Platform has blocked a connection" errors?

    - by Richard
    Our systems have been compromised by something recently, which has led us to take a more detailed look at what is happening on our workstations. I have noticed an issue where the Security log of this Windows 7 workstation is continually logging a security "Audit Failure" where the detail is that "The Windows Filtering Platform has blocked a connection". This is happening thousands of times a day and would appear to be our BT Business Broadband HGV 2700 ADSL router attempting to connect to Port 137 (NetBIOS) on my workstation and being blocked. This has unfortunately had the effect of filling up the log files so much that anything which might have been of use, which was logged over the weekend to help debug the intrusion, has been "overwritten off the end" of the Security log. (I've since increased the log file size limits massively and turned on archiving.) Does anyone know if this is standard behaviour of a BT ADSL router, or whether this indicates that the router is compromised in some way or malfunctioning, or have any further suggestions as to how to diagnose this problem?

    Read the article

  • Windows Server 2003 - Handling hundreds of simultaneous downloads

    - by Paul Hinett
    At the moment I have a single server with 4 1TB hard disks. Daily, I have over 150 MP3 music files uploaded (around 80MB each). At busy periods there are over 300 people streaming / downloading these mixes all at once; 75% of the activity is on the most recently uploaded stuff, which is all on a single hard disk. My read speeds on the hard disk are very low due to such high activity, with 200+ reads all happening at the same time on a single hard disk (I ran some tests with HDTach). What would be a logical solution to solve this? A couple of ideas I had are: Load balance with another server. Install faster hard disks (what are best these days? SCSI / SATA). Spread the most accessed files over the 4 drives so the load is shared between all 4 disks, instead of all the most accessed (most recent) files sitting on the most recently installed drive. Obviously load balancing is the most expensive option, but would it dramatically help? Some help on this situation would be great!

    Read the article

  • Amazon EC2 spot instances - is there a catch ?

    - by gareth_bowles
    I needed to start a new EC2 instance today and decided to try out the new spot instances, where you can reduce your instance cost by bidding on the maximum per-hour price you're prepared to pay. Since today's spot price was only 3.5c / hour, compared with 8.5c / hour for an on-demand instance, I was wondering: if I just bid a really high price, say 10c / hour, can I effectively be sure of getting a much cheaper long-running instance than an on-demand instance (since the spot instances are only charged by the current spot price) ? I suppose it's theoretically possible for the spot price to go over the on-demand price, but as far as I can tell from the data on the AWS site, the spot price has always been well below that.

    Read the article

  • Disable reverse PTR check in Zimbra and force accept from invalid domains

    - by ewwhite
    I've moved an older Sendmail/Dovecot system to a Zimbra community edition system. I need to be able to receive messages from certain standalone Linux hosts that may not have valid A records or proper reverse DNS entries established (e.g. AT&T is the ISP or systems sitting on a consumer-level ISP). Establishing the reverse DNS or setting a SMARTHOST is not an option. The error I get in zimbra.log is: zimbra postfix/smtp[2200]: DB83B231B53: to=<root@host_name.baddomain.com>, relay=none, delay=0.07, delays=0.06/0/0/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=host_name.baddomain.com type=A: Host not found How can I override this? Is this more of a Postfix issue or is it Zimbra? edit - The problem seems to be with an underscore in the hostname of the server. So it's a problem with root@host_name.baddomain.com. Again, how can I override this in Zimbra?

    Read the article

  • 4GB of RAM in MacOSX 10.5, only 3GB in MacOSX 10.6

    - by Albert
    Hi, I was using MacOSX 10.5 on my MacBook until today and I had 4GB of memory there. Now I have updated to MacOSX 10.6 and it only displays 3GB. Why is that? How can I fix it? Also, I am a bit wondering why most people (well, most of the Google hits explained the 3GB issue that way -- leaving out the fact that it has worked earlier) are saying that a 32bit system can under no circumstances access more than 3.2GB. Don't we have PAE nowadays in most systems? Thanks, Albert

    Read the article

  • SQL Cluster on Hyper V Failover Cluster

    - by Chris W
    We have a VM running SQL Server on a 6 node cluster of blades. The VM's data files are stored on a SAN attached using a direct iSCSI connection. As this SQL server will be running a number of important databases, we're debating whether we should be clustering the SQL Server, or whether the fact that the VM is running in the cluster itself is sufficient to give us high availability. I'm used to running SQL clusters when dealing with physical servers, but I'm a bit sketchy on what is best practice when all the servers are just VMs sat on Hyper-V. If a blade running the VM fails, I presume the VM will be started up on another node. I'm guessing the only benefit that adding a SQL cluster to the setup will give us is that the recovery time after a failure will be a little quicker? Are there any other benefits?

    Read the article

  • Copy all files and folders excluding subversion files and folders on OS X

    - by Michael Prescott
    I'm trying to copy all files and folders from one directory to another, but exclude certain files. Specifically, I want to exclude subversion files and folders. However, I'd like a general yet concise solution. I imagine I'll find the need to exclude several types of files in the near future. For example, I might want to exclude .svn, *.bak, and *.prj. Here is what I've put together so far, but it is not working for me. The first part, find, works, but I'm doing something wrong with xargs and cp. I tried cp with and without the -R. Also, I'm using OS X and it appears to have a less featured version of xargs than linux systems. find ./sourcedirectory -not \( -name .svn -a -prune \) | xargs -IFILES cp -R FILES ./destinationdirectory

    Read the article

  • Excel crashes when opening Excel files from Internet Explorer

    - by Rob
    I have been running into some issues when opening Excel files from Internet Explorer. Generally the first document or two will open fine, but after that, trying to open a file will cause Excel and Internet Explorer to crash to the desktop without any notification being given. This doesn't happen for users who are running Excel 2007, but for users with Excel 2003 it may or may not happen to them. The files in question are Excel XML files, and Internet Explorer 6 and Excel 2003 are being used. At this time it would not be possible to upgrade Internet Explorer, but it would be possible to upgrade Excel to version 2007 if that would resolve the issue. Overdue update: We recently upgraded to Firefox at the office, which has rendered this error a non-issue; however, it is still unresolved from the standpoint that we haven't been able to come up with an explanation of the issue. Since IE6 is still installed on the systems, a fix to the problem (or explanation of why it's happening) would be appreciated.

    Read the article

  • Vim: Use different ~/.vim/plugin/ directories for different versions of vim?

    - by Stefan Lasiewski
    Like many of you, my custom Vim configuration is stored in my ~/.vimrc, with the plugins, colors, etc. stored under ~/.vim/plugins, ~/.vim/colors, etc. I want to share a single Vim configuration among many servers. Some of these servers run Vim 7, some run the older Vim 6. Most Vim plugins are intended for Vim 7, but older versions still exist for those of us on older systems. See DirDiff for an example. If I am on a system which runs Vim 6, how can I configure Vim to only use Vim 6-compatible plugins? I was thinking about storing older plugins in a subdirectory like ~/.vim/plugins6/ and keep the Vim plugins in ~/.vim/plugins, but then how can I tell Vim6 to ignore ~/.vim/plugins and use ~/.vim/plugins6 instead?

    Read the article

  • Which part of the computer needs all the power from the PSU?

    - by Xeoncross
    A couple of years ago I was building a new Core 2 Quad system and, after reading all the reviews, was convinced that I would need at least a 400 watt power supply unit (PSU). I bought a 500W Antec EarthWatts. However, last year I bought a Kill-A-Watt power meter to test some things around our house and found that my PC was only using 80W of power while idle! (C2Q, 4GB RAM, SATA HD, & DVD burner) Well, here I am building another computer with a 65 watt Core 2 CPU in it, and I'm wondering if I can skimp out this time and get a 300 watt or so unit, since my usage doesn't seem to be what everyone claims it is. I'm sure that the people in the reviews who exhausted a 500 watt PSU weren't lying - so what is it that uses all that? The high-end dual SLI video cards? Lots of SATA drives? Overclocking?

    Read the article

  • Move smaller hard drive to partition on a larger hard drive

    - by bluejeansummer
    My parents bought a new hard drive for a laptop that I've owned for several years. It's much larger than the current one, so I plan on splitting it up to dual boot it with Ubuntu. I have no problem with partitioning a drive (I always keep a LiveCD handy), but my question is this: how can I go about moving the existing partition to the new drive? This is a laptop, so I can't simply plug the new drive into another slot. Also, even if I manage to move it, will Windows still work on the new drive in a larger partition? I've had this laptop for quite a while, and I've lost the recovery discs that came with it a long time ago. I also have a lot of software without CDs to reinstall them with. This makes not reinstalling Windows a high priority. In case it helps, both drives use 2.5" PATA, and I have a 1 TB external drive available if it's needed.

    Read the article

  • Change DPI of one user using W2K3 Remote Desktop / Terminal Server

    - by GvS
    In short: How do I increase the DPI of some (not all) of our customers connected to our RDP server? We are running a W2K3 Terminal Server that our clients connect to to run our application. One of our clients complains that all fonts / icons etc are too small. This user has a high DPI monitor. The DPI of the client OS (XP in this case) is not transferred to the server. To make things worse (or more interesting) the Display properties dialog disables the Advanced button that you can use to change the DPI on normal clients.

    Read the article
