Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • A tour of the GlassFish 3.1.2 DCOM support

    - by alexismp
    While we've mentioned the DCOM support in GlassFish 3.1.2 several times before, you'll probably find Byron's DCOM blog entry to be useful if you're using Windows as a deployment platform for your GlassFish cluster. Byron discusses how DCOM is used to communicate with remote Windows nodes participating in a GlassFish cluster, what Java libraries were used to wrap around DCOM, what new asadmin commands were added (in particular validate-dcom), as well as some tips to make this all work in your specific environment. In addition to this blog post, you should consider reading the official product documentation: • Considerations for Using DCOM for Centralized Administration • Setting Up DCOM and Testing the DCOM Set Up

    Read the article

  • How to manage two video cards on a laptop that runs Ubuntu 10.10?

    - by Marc-François Cochaux-Laberge
    I have a laptop with two video cards: one ATI and one integrated Intel. On Windows, I can choose which video card I want to use. For example, I use the Intel card for normal use, and for gaming I switch to my ATI card for better performance but a shorter battery life. In Ubuntu 10.10, only the Intel driver is installed, the ATI driver for my card doesn't work at all, and there's heat coming out of my computer all the time, like when I'm playing video games on Windows. I think both cards are active, but only the Intel one is useful. How can I solve this by making sure Ubuntu is aware of the two video cards and by disabling my ATI card? Or maybe I am all wrong about this?

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system.  This is actually not any different than 10g and previous versions of UCM when it ran directly on a JVM.  And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured. And if they aren't followed, you may end up with some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required. Beyond documents being stored on the file system and/or database and metadata being stored in the database along with other structured data, there is other information being read from and written to the file system.  Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files.  In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don’t step on each other.  WCC keeps track of this through the use of lock files on the file system.  Because of this, each node of the WCC cluster must have access to the same file system just as they have access to the same database. WCC uses its own locking mechanism based on files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node).  If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on.  Or if a particular file is locked by one of the node clients, this could interfere with access by another node.  Unfortunately, disabling file attribute caching on the file share can impact performance.  So it is important to only disable caching and locking on the particular folders which require it.  When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder.  And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories.  These will be used for handling temporary files, cached files, and files used to deliver the UI. For these directories, the only folder which needs to have file attribute caching and locking disabled is the ‘Content Server Instance Folder’.  So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the ‘noac’ and ‘nolock’ options for the mount options. For the other directories, caching and locking should be enabled to provide the best performance for those locations.
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:
    #Server System Properties
    IDC_Id=UCM_server1
    #Server Directory Variables
    IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
    FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
    AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
    AppServerJavaUse64Bit=true
    IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
    VaultDir=/mnt/share_with_cache/ucm/cs/vault/
    WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/
    #Server Classpath variables
    #Additional Variables
    #NOTE: UserProfilesDir is only available in PS5 – 11.1.1.6.0
    UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/
    In addition to these folder configurations, it’s also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory.  So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:
    VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
    TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
    EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/
    And of course, don’t forget the cluster-specific configuration values to add as well.  These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:
    ArchiverDoLocks=true
    DisableSharedCacheChecking=true
    ServiceAllowRetry=true    (use only with Oracle RAC Database)
    PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)
    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site.  In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • Oracle Developer Day: "Die Oracle Datenbank in der Praxis"

    - by Ulrike Schwinn (DBA Community)
    In the new year, Oracle Developer Days will again take place in various cities! In this event, put together specifically by the database business unit (BU DB), you will pick up many tips and tricks from the field and get up to date on the following topics: the differences between the editions and their secrets; the extensive base feature set available even without options; performance and scalability in the individual editions; cost and resource savings made easy; security in the database; increasing availability with simple means; handling large data volumes; cloud technologies in the Oracle Database. An outlook on the features of the new database version planned for 2013 rounds off the workshop. Dates, agenda, venues and registration can be found here. Register for the event today - attendance is free of charge!

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of 3 developers mostly in maintenance mode with traditional ASP.NET, classic ASP, .NET integration services and utilities around the company’s third-party packages, and a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new-construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of “high cost” and an “overcomplicated process”. Since they had no preconceptions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS? 1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you on whether you want a clean start or need quick access to all the version notes and history of the bits. 2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks. 3. With new construction, begin work with requirements and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear, this is very intuitive and MSDN has a ton of lesson-based labs and videos. 4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (formerly known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver. 5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and naturally, everyone will want workitems to help them organize the chaos! 6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information. Some notable explanations of TFS: Work Item Tracking and Project Management - A workitem represents the unit of work within the system which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or a "change request". They drive the work effort by the individual assigned to it and also provide a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal.
Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screen shots, IntelliTrace (debugging history), and in some cases, if the lab manager is being used, a snapshot of the tested environment are available. Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what any developer did with the code has become much easier, which makes it simpler to picture and resolve issues. Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails. Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" in each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
    Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Stuart’s insights, given the team’s role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, were a true value-add at yesterday’s webcast. And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 am PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the focus of the discussion will be around the compliance challenges a healthcare organization faces and best practices for tackling those. Here are the details: Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics, Tuesday, November 29, 2011, 10:00 a.m. PT / 1:00 p.m. ET. Register Today. The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn’t have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions. Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application's access management function / files? A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL) that eliminates the need to write connectors to different applications. Oracle Identity Analytics’ import engine supports complex entitlement feeds saved as either text files or XML. The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager. Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics? A. Oracle recently announced availability of enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views and significant improvement in performance, including 3X faster data imports, 3X faster certification campaign generation and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allows for more accurate certification reviews. And OIA's improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity. Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today? A. From ISACA: Hello and thank you for your interest in the 2011 ISACA Webinar Program!
Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future. Q. Would you be able to use this to help manage licenses for software? That is to say - could it track software that is not used by a user, thus eliminating the software license? A. OIA’s integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on a company’s policies, this could trigger an automated workflow for account deletion or a request for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when and the current status. Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification? A. OIA’s identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user’s risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts. Q. How does Oracle Identity Analytics work with Cloud Security? A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics. For enterprises looking to manage their compliance requirements but without hosting those in-house, instead having a hosting provider offer managed Identity Management services to the organization, Oracle Identity Analytics can be leveraged much the same way as you would in an on-premise (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways. Q. Would you recommend this as a cost-effective solution for a smaller organization with around 2,500 users? A. The key return on investment (ROI) on Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes and minimizing audit exposures and penalties. Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and potential ROI calculation, we recommend you refer to the Forrester Study on Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.

    Read the article

  • Oracle Expert Live Virtual Seminars - Learn the tricks that only the experts know

    - by rituchhibber
    Oracle University Expert Seminars are exclusive events delivered by top Oracle experts with years of experience in working with Oracle products.
    Introduction into ADF & BPM with Markus Grünewald - 11-12 December 2012
    ADF/WebCenter 11g Development in Depth with Andrejus Baranovskis - 13-14 December 2012
    Beating the Optimizer with Jonathan Lewis - Online - 17 January 2013
    RAC Performance Tuning On-Line with Arup Nanda - 25 January 2013
    Mastering Oracle Parallel Execution with Randolf Geist - 30 January 2013
    Minimize Downtime with Rolling Upgrade using Data Guard with Uwe Hesse - 8 February 2013
    For a full list of Oracle Expert Seminars near you or online, click here. Remember that your OPN discount is applied to the standard prices shown on the website. For more information, assistance in booking and to request new dates, contact your local Oracle University Service Desk.

    Read the article

  • Is it bad to have an "Obsessive Refactoring Disorder"?

    - by Rachel
    I was reading this question and realized that could almost be me. I am fairly OCD about refactoring someone else's code when I see that I can improve it. For example, if the code contains duplicate methods to do the same thing with nothing more than a single parameter changing, I feel I have to remove all the copy/paste methods and replace them with one generic one. Is this bad? Should I try to stop? I try not to refactor unless I can actually make improvements to the code's performance or readability, or if the person who wrote the code isn't following our standard naming conventions (I hate expecting a variable to be local because of the naming standard, only to discover it is a global variable which has been incorrectly named).
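    As a hypothetical illustration of the kind of refactoring described above (the file-filtering methods and all names are invented for this sketch, not taken from the question), collapsing copy/paste methods that differ only in a single value into one parameterized method might look like this in C#:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class RefactoringSketch
    {
        // Before: near-duplicate methods that differ only in the extension they filter on.
        static List<string> GetCsvFiles(IEnumerable<string> paths) =>
            paths.Where(p => p.EndsWith(".csv", StringComparison.OrdinalIgnoreCase)).ToList();

        static List<string> GetXmlFiles(IEnumerable<string> paths) =>
            paths.Where(p => p.EndsWith(".xml", StringComparison.OrdinalIgnoreCase)).ToList();

        // After: one method with the varying part lifted into a parameter.
        static List<string> GetFilesByExtension(IEnumerable<string> paths, string extension) =>
            paths.Where(p => p.EndsWith(extension, StringComparison.OrdinalIgnoreCase)).ToList();

        static void Main()
        {
            var paths = new[] { "report.csv", "config.xml", "notes.txt" };
            Console.WriteLine(string.Join(", ", GetFilesByExtension(paths, ".csv"))); // report.csv
        }
    }

    Whether such a change is worth making usually rests on the readability and duplication argument raised in the question rather than on any measurable performance gain.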

    Read the article

  • DotNetNuke 5.3.1 Released

    I am happy to announce that the DotNetNuke 5.3.1 release is now available for download. This release was focused on fixing 3 significant issues with the 5.3.0 release which caused us to remove the release from CodePlex and our DotNetNuke Support Network. It is never easy to admit that significant issues slipped through testing and made it into a release package forcing you to take drastic actions. The only thing we can do is to re-evaluate our processes and continue to find areas for improvement...

    Read the article

  • Arrays for a heightmap tile-based map

    - by JPiolho
    I'm making a game that uses a map which has tiles, corners and borders. Here's a graphical representation: I've managed to store tiles and corners in memory but I'm having trouble getting the borders structured. For the tiles, I have a [Map Width * Map Height] sized array. For corners I have a [(Map Width + 1) * (Map Height + 1)] sized array. I've already worked out the math needed to access corners from a tile, but I can't figure out how to store and access the borders from a single array. Tiles store the type (and other game logic variables), and via the array index I can get the X, Y. Via this tile position it is possible to get the array index of the corners (which store the Z index). The borders will store a game object, and accessing corners from only border info would also be required. If someone has a better way to store these for better memory use and performance I would gladly accept that. EDIT: Using C# and JavaScript.
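    One possible layout (a sketch only, with invented names, and not necessarily the best scheme for this particular game) is to keep all borders in a single array with the horizontal edges first and the vertical edges after them, so a tile's four borders can be computed from its (x, y) just like its corners:

    using System;

    class GridIndexing
    {
        const int Width = 4;   // tiles across (example value)
        const int Height = 3;  // tiles down (example value)

        static readonly int[] Tiles   = new int[Width * Height];
        static readonly int[] Corners = new int[(Width + 1) * (Height + 1)];
        // Horizontal borders first: Width * (Height + 1); then vertical: (Width + 1) * Height.
        static readonly int[] Borders = new int[Width * (Height + 1) + (Width + 1) * Height];

        static int TileIndex(int x, int y)   => y * Width + x;
        static int CornerIndex(int x, int y) => y * (Width + 1) + x;

        // Horizontal edges above/below a tile.
        static int NorthBorder(int x, int y) => y * Width + x;
        static int SouthBorder(int x, int y) => (y + 1) * Width + x;

        // Vertical edges left/right of a tile, offset past all horizontal edges.
        static int WestBorder(int x, int y)  => Width * (Height + 1) + y * (Width + 1) + x;
        static int EastBorder(int x, int y)  => Width * (Height + 1) + y * (Width + 1) + x + 1;

        static void Main()
        {
            // The four border indices surrounding tile (2, 1):
            Console.WriteLine($"{NorthBorder(2, 1)} {SouthBorder(2, 1)} {WestBorder(2, 1)} {EastBorder(2, 1)}");
        }
    }

    A border's own (x, y) can be recovered from its index by checking whether it falls in the horizontal or vertical block, which in turn gives the two corner indices at its ends.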

    Read the article

  • How to be a responsible early adopter?

    - by Gaurav
    A disconcertingly high number of new technologies/paradigms are overhyped (as is evident from replies to this question). Combine this with the fact that being an early adopter is inherently risky, and evaluating a technology before adoption becomes critical. So how exactly do the early adopters among you go about it? I can think of the following general criteria: 1. For early adoption, the technology must address a previously unaddressed issue, and/or 2. For acceptance beyond early adoption, the technology should address performance. 3. One indicator that something is not right is when there is too much jargon around a technology that is either irrelevant or has no evidence to back it up. What is your opinion? Update: 4. BTW, the biggest reason to fear a technology is when it is imposed by tech-unaware management based entirely on the number of buzzwords.

    Read the article

  • Soda Cans Exploding Under the Stress of High Voltage [Video]

    - by Jason Fitzpatrick
    In an effort to start your Monday off in true Mad Scientist style, we bring you soda cans being decimated by thousands of volts in a “Thumper”. What is a thumper, you ask? During office hours, it’s a high-voltage testing unit most often used to stress test electric cables. In the off hours, however, the electrical engineering geeks over at The Geek Group like to shove anywhere from a few hundred to thousands of volts through unsuspecting objects to see what happens. In this installment they’re shooting high voltage through a variety of soft drink cans with an end result that sounds and looks like a cannon loaded with Mountain Dew. [via Hacked Gadgets]

    Read the article

  • Thoughts on Development using Virtual Machines

    - by J_A_X
    I'll be working as a development lead for a startup and I've suggested that we use VMs for development. I'm not talking about each developer having a desktop with VMs for testing/development; I mean having a server rack where all VMs are managed, and having the developers work from a micro PC (ChromeOS anyone?) locally, or even remotely from their home computers. To me, the benefits are that it's extremely scalable, cheaper in the long run, easier to manage, and that we utilize the hardware to its maximum potential. As for cons, I can't think of any particular showstoppers other than that we'll need someone to set up and maintain said setup. I was hoping that some of you might have had a similar setup at your place of employment and would be able to weigh in with your opinions. Thanks.

    Read the article

  • SQL SERVER 2012 Editions – Highlights of The Cloud-Ready Information Platform

    - by pinaldave
    Microsoft has just announced SQL Server 2012 Editions information on the official SQL Server 2012 site. SQL Server 2012 will be available in three main editions: Enterprise, Business Intelligence and Standard. The other editions are Web, Developer and Express. Here are the salient features of each edition: Enterprise - Advanced high availability with AlwaysOn; high-performance data warehousing with ColumnStore; maximum virtualization (with Software Assurance); inclusive of the Business Intelligence edition’s capabilities. Business Intelligence - Rapid data discovery with Power View; corporate and scalable reporting and analytics; Data Quality Services and Master Data Services; inclusive of the Standard edition’s capabilities. Standard - Continues to offer basic database, reporting and analytics capabilities. There is a comparison chart of various other aspects of the above editions; please refer here. Additionally, SQL Server 2012 licensing is also explained here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-04

    - by Bob Rhubart
    Is This How the Execs React to Your Recommendations? blogs.oracle.com "Well then, do your homework next time!" advises Rick Ramsey, and offers a list of Oracle Solaris 11 resources that just might make your next encounter a little less humiliating. WebLogic Server Performance and Tuning: Part I - Tuning JVM | Gokhan Gungor blogs.oracle.com A detailed how-to post from Gokhan Gungor. How to deal with transport level security policy with OSB | Jian Liang blogs.oracle.com Jian Liang shares "a use case for Oracle Service Bus (OSB) 11gPS4 to consume a Web Service which is secured by HTTP transport level security policy." Thought for the Day "Simple things should be simple and complex things should be possible." — Alan Kay

    Read the article

  • Hear about Oracle Supply Chain at Pella, Apr 27-29 '10

    - by [email protected]
    Oracle Customer Showcase - Apr 27-29 '10, featuring Pella Corp. Delivering Greater Customer Value: "Discovering the Lean Value Chain". Pella is once again hosting Oracle customers at a mega-reference event in Pella, Iowa, on April 27-29. The agenda features a cross-stack set of topics and issues, including strategies for delivering customer value, improving the customer experience, Value Chain Planning / Manufacturing / Enterprise Performance Management, and Lean practices. Several executives will keynote, including Pella CIO Steve Printz. The event includes a demo grounds, round-table discussion groups, plant tours, and networking opportunities.

    Read the article

  • Sending email notifications to users

    - by Web Girl
    What is the preferable way to send email notifications to users? I can do it both ways, but which is better? 1. Have some C# code that calls a stored procedure in the database; the stored procedure, based on some logic, pulls all the email data and sends the email using Database Mail. Or 2. The C# code calls a stored procedure, gets all the necessary data back and sends the email itself using an SMTP server, etc. I just wonder which is preferable in terms of performance, etc. The C# code is a library that would be part of the web application. So the question is where it's better to put the load: on the application server or the database server? The system will not be crazy busy; it's not like Amazon or something. But still it would be nice to create something that makes sense.
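    For reference, a minimal sketch of the second approach (sending from the application with System.Net.Mail) might look like the following; the SMTP host, credentials and addresses are placeholder values, not anything taken from the question:

    using System.Net;
    using System.Net.Mail;

    class NotificationMailer
    {
        static void Main()
        {
            // Placeholder settings; real values would come from configuration.
            using (var client = new SmtpClient("smtp.example.com", 587))
            {
                client.EnableSsl = true;
                client.Credentials = new NetworkCredential("notifications@example.com", "secret");

                var message = new MailMessage(
                    "notifications@example.com",                       // from
                    "user@example.com",                                // to
                    "Your report is ready",                            // subject
                    "Hello, your weekly report has been generated.");  // body

                client.Send(message);
            }
        }
    }

    Either way the database trip to gather recipient data is the same; the choice mostly determines whether the SMTP conversation runs on the application server or the database server.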

    Read the article

  • Google I/O 2010 - Tips and tricks for Google Earth API and KML

Google I/O 2010 - Mapping in 3D: Tips and tricks for Google Earth API and KML. Geo 201. Josh Livni, Mano Marks. Google Earth and the Earth API can handle a tremendous amount of data. But you always have more. We will talk about integrating large datasets efficiently, coding for optimal performance, and taking advantage of advanced features in KML and the Earth API. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 14 0 ratings Time: 01:01:18 More in Science & Technology

    Read the article

  • Apache on Mac Mavericks issue [migrated]

    - by Michael
    Trying to run Apache so that I can create a testing server on my Mac. When I start Apache it starts, but it doesn't run (no connection to localhost). I'll include the Unix terminal output below; you'll see that after starting there are no processes, and I did a check to show you what was running on my port 80... I don't entirely know what that means.
    Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
    Michaels-MacBook-Pro-3:~ michaelramos$ ps aux | grep httpd
    michaelramos 348 0.0 0.0 2442000 624 s000 S+ 8:51AM 0:00.00 grep httpd
    Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
    org.apache.httpd: Already loaded
    Michaels-MacBook-Pro-3:~ michaelramos$ sudo lsof -i ':80'
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    ocspd 96 root 18u IPv4 0x8402f926599c58df 0t0 TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
    ocspd 96 root 20u IPv4 0x8402f926599c58df 0t0 TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
    ocspd 96 root 21u IPv4 0x8402f926599c50f7 0t0 TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)
    ocspd 96 root 23u IPv4 0x8402f926599c50f7 0t0 TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)

    Read the article

  • Single database, multiple system dependency

    - by davenewza
    Consider an environment where we have a single, core database, with many separate systems using this one database. This leads to all of these systems having a common dependency, which ultimately introduces coupling between them. This means that we cannot always evolve systems independently of each other. Structural changes to the database (even if only intended for one particular system) require a full sweep test of ALL systems, and may require that other systems be 'patched' and subsequently released. This is especially tricky when you want to have separate teams working on different projects. What is a good 'pattern' to help in avoiding such coupling? I would imagine that a database should be exclusively depended on by one system. If other systems require data for whatever reason, they should request it from an API service of some kind. A drawback of this approach which comes to mind is performance: routing data between high-throughput systems through service calls is much slower than through a database connection.
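    As a purely illustrative sketch of the API-service idea above (every name here is invented), the system that owns the database would publish a narrow, versioned contract, and other systems would depend on that contract rather than on the shared schema:

    using System.Collections.Generic;

    // Published by the system that owns the customer tables; other systems
    // reference only this contract, never the schema behind it.
    public interface ICustomerDataService
    {
        CustomerDto GetCustomer(int customerId);
        IReadOnlyList<CustomerDto> FindCustomersByRegion(string region);
    }

    // The DTO is the public shape; the underlying tables can be restructured
    // freely as long as this shape (or a new version of it) is preserved.
    public sealed class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Region { get; set; }
    }

    The trade-off called out in the question still applies: a high-throughput consumer pays a service-call cost per request, which is why such contracts are often made coarse-grained (batched lookups) rather than row-by-row.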

    Read the article

  • D, Vala or Go for game development [on hold]

    - by Sheosi
    I'm looking to choose a compiled language for my 3D engine. The engine is written in C++; however, I would like to help coders by using a language which is good for games. I came up with these three: Vala, D and Go. The engine is being made so you write as little code as possible, and the "main" language is going to be Lua, so any of these will be the "advanced" one (mainly for things which could affect performance or ...). Because of all this, and the fact that I heard Go is good for small projects, I thought it would be a pretty good option; however, it does not seem to be made (at least originally) for games, and also some say I can have trouble with garbage collection in games. So what do you think? Do you have any experience with any of these three in games? How was it?

    Read the article

  • Where I can find SQL Generated by Entity framework?

    - by Jalpesh P. Vadgama
    A few days back I was optimizing performance with Entity Framework and LINQ queries; I was using LINQPad and looking at the SQL generated by the LINQ or Entity Framework queries. At some point the question came to mind: how can I find the SQL statement generated by Entity Framework? After some struggling I managed to find a way, so I thought it would be a great idea to write a post about it and share my knowledge. So in this post I will explain how to find the SQL statements generated by Entity Framework queries. Read More on dotnetjalps.com
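    The linked post has the author's own walkthrough; as one widely used technique from that era of Entity Framework (ObjectContext-based models), casting the LINQ query to ObjectQuery and calling ToTraceString() returns the SQL that will be sent to the database. The context and entity names below are invented, and the model itself is assumed to already exist:

    using System;
    using System.Data.Objects;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // "OrdersContext" and its "Orders" set are hypothetical; substitute your own model.
            using (var context = new OrdersContext())
            {
                IQueryable<Order> query = context.Orders.Where(o => o.Total > 100);

                // ToTraceString() shows the generated SQL without executing the query.
                var objectQuery = (ObjectQuery)query;
                Console.WriteLine(objectQuery.ToTraceString());
            }
        }
    }

    With DbContext-based models (EF 4.1 and later), calling ToString() on the query gives a similar result.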

    Read the article

  • Hot Java Content

    - by Tori Wieldt
    It's August, summertime in the United States, and time for many of us to go on vacation. (You'll have to find my personal account to see more photos of the Monterey Bay Aquarium.) Here's some great Java content that you may have missed while I was gone:
    Blogs - Project Jigsaw: Late for the train: The Q&A; JSR 355 Final Release, and moves JCP to version 2.9; Oracle releases JDK for Linux ARM, JRE for Mac OS X; Architects and Architecture at JavaOne 2012; Java Champions at JavaOne 2012.
    Podcasts & Videos - Java Spotlight Episode 96: Johan Vos on Glassfish and JavaFX; Java Spotlight Episode 94: Kirk Pepperdine on Java Performance Tuning; Java Spotlight Episode 93: Jonathan Giles on JavaFX 2.2 UI Controls; Video: JavaFX Canvas Node.
    July/August Java Magazine (free subscription) - Developer Power: Web-based Development Tools; Fork/Join Framework for Client Java Applications; Intro to Web Service Security; How to Modify javac; Oracle's Berkeley DB Java Edition's Java API; and more. Java Magazine is available on the App Store and the Android Market.
    Get all this great Java content while it's as hot as a North American (non-San Franciscian) summer.

    Read the article

  • Most efficient way to rebuild a tree structure from data

    - by Ahsan
    Have a question on recursively populating JsTree using the .NET wrapper available via NuGet. Any help would be greatly appreciated. The .NET class JsTree3Node has a property named Children which holds a list of JsTree3Nodes, and the pre-existing table which contains the node structure looks like this:
    NodeId  ParentNodeId  Data         AbsolutePath
    1       NULL          News         /News
    2       1             Financial    /News/Financial
    3       2             StockMarket  /News/Financial/StockMarket
    I have an EF data context from the database, so my code currently looks like this:
    var parentNode = new JsTree3Node(Guid.NewGuid().ToString());
    foreach (var nodeItem in context.Nodes)
    {
        parentNode.Children.Add(nodeItem.Data); // What is the most efficient logic to do this recursively?
    }
    As the inline comment says in the above code, what would be the most efficient way to load the JsTree data onto the parentNode object? I can change the existing node table to suit the logic, so feel free to suggest any changes to improve performance.
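    One possible approach (a sketch only — it assumes the JsTree3Node constructor and Children collection behave as the question describes, and the NodeRow/JsTree3Node definitions below are stand-ins so the snippet compiles on its own) is to read the rows once, group them by ParentNodeId in memory, and recurse over that lookup instead of querying per node:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Stand-in for a row of the node table shown above.
    public class NodeRow
    {
        public int NodeId { get; set; }
        public int? ParentNodeId { get; set; }
        public string Data { get; set; }
    }

    // Minimal stand-in so the sketch compiles; the real class comes from the JsTree .NET wrapper.
    public class JsTree3Node
    {
        public JsTree3Node(string data) { Data = data; Children = new List<JsTree3Node>(); }
        public string Data { get; private set; }
        public List<JsTree3Node> Children { get; private set; }
    }

    public static class TreeBuilder
    {
        // Adds every row whose ParentNodeId equals parentId as a child of parent, then recurses.
        public static void AddChildren(JsTree3Node parent, int? parentId, ILookup<int?, NodeRow> byParent)
        {
            foreach (var row in byParent[parentId])
            {
                var child = new JsTree3Node(row.Data);
                parent.Children.Add(child);
                AddChildren(child, row.NodeId, byParent);
            }
        }

        public static void Main()
        {
            var rows = new List<NodeRow>   // in the real code this would be context.Nodes.ToList()
            {
                new NodeRow { NodeId = 1, ParentNodeId = null, Data = "News" },
                new NodeRow { NodeId = 2, ParentNodeId = 1,    Data = "Financial" },
                new NodeRow { NodeId = 3, ParentNodeId = 2,    Data = "StockMarket" },
            };
            var root = new JsTree3Node(Guid.NewGuid().ToString());
            AddChildren(root, null, rows.ToLookup(r => r.ParentNodeId));
            Console.WriteLine(root.Children[0].Children[0].Children[0].Data); // StockMarket
        }
    }

    The single ToList() plus ToLookup() keeps the work at one database round trip and O(n) in-memory grouping, regardless of how deep the tree is.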

    Read the article

  • What is the difference between Output Text and Output Text (Active)?

    - by [email protected]
    When building an ADF Faces application in JDeveloper, you might have noticed that in the Component Palette there is an option for both "Output Text" as well as "Output Text (Active)". Why do we have both of these options? Under the covers, there are actually two tags, af:outputText and af:activeOutputText. Similarly, there is an active version of af:image, namely af:activeImage, and an active version of af:commandToolbarButton, af:activeCommandToolbarButton. In the vast majority of cases, developers should use the non-active version of the components. The active versions of the components are there to support specific use cases around Server Side Push using the Active Data Service feature. Most of our customers don't use Server Side Push, and hence do not need the active version of the components. You can learn more about Server Side Push with ADF Active Data Service in this blog. By using the active version of af:outputText, af:image or af:commandToolbarButton when you don't need to, you are taking an unnecessary performance hit.

    Read the article
