Search Results

Search found 10076 results on 404 pages for 'high volume'.


  • What would be the best way to get Apple to donate their JVM-work to OpenJDK?

    - by Thorbjørn Ravn Andersen
    It has been announced that Apple is deprecating its JVM. It is a really nice piece of work that gives an excellent user experience for Swing applications on OS X, and it would be a pity if it just went away. As I see it, the only realistic long-term alternative to Apple's own JVM is OpenJDK, unless Oracle chooses to take over the Apple JVM, which I doubt since OS X is not a core platform for Oracle. But for this to work, Apple needs to donate its enhancements to OpenJDK, and they need to be under the GPL. Apple already did so with WebKit, so there is precedent. What would be the best way to make them do it? Start a Stack Exchange poll? Get James Gosling and other high-profile Java people to say so? Email Steve Jobs? Suggestions? EDIT: Well, Apple has now promised to do so :) Shows that asking on Stack Exchange really MAKES A DIFFERENCE! Great!

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment."

    There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

    Optimized for Oracle Database 12c - GoldenGate 12c is custom-tailored to the unique capabilities of Oracle Database 12c and, out of the box, supports both multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c now has three parts (PDB name, schema name, and object name). We have changed the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than at each individual PDB. Having the capture process at the container level reduces resource usage and the number of processes. To view a conceptual architecture diagram click here.

    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed data to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple delivery processes (click here to view the previous method). Each delivery process executed independently, without any interaction with or knowledge of the other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.

    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, and DDL, are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.

    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to use both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between the on-premises and cloud-based systems.

    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded for MS SQL Server, DB2, and Teradata.

    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with this great new feature.

    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

    Stay tuned for more blogs on the new features and for the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview"

    Read the article

  • 15 Oracle Winners at Progressive Manufacturing 100 Awards Event

    - by [email protected]
    Oracle is pleased to congratulate its 15 winners of the PM100 awards program, held at the Breakers Hotel in Palm Beach, Florida, May 3-5, 2010. The Progressive Manufacturing Summit is where today's top manufacturing executives come together and share their strategies, experiences, and best practices for becoming more competitive in today's global market. The format is extremely interactive, providing the rarest of opportunities to participate in a high-level conversation with leaders in supply chain and manufacturing. Attendees walk away with new insights and strategies for growing and moving their business forward, new contacts, and a tangible action plan. For more information: Event: http://www.managingautomation.com/summit/index.aspx Winners: http://www.managingautomation.com/awards/winners.aspx

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server data warehouse design, Analysis Services, and enterprise reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window.

    My client uses Tivoli, not SQL Agent, for running all their production jobs, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed with the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure.

    After a number more failed attempts, and after getting the Teradata DBA involved to monitor and see whether we were connecting and then failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table, and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was twofold:
    Always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
    SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
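    The same two knobs exist outside of SSIS as well. As a rough illustration of the distinction (a minimal Python/pyodbc sketch of my own, not the SSIS property itself; the DSN, credentials, and table name are made up), note that the connection/login timeout and the query/command timeout are set in two different places:

        # Illustrative only: pyodbc exposes the same split between the connection
        # (login) timeout and the command (query) timeout that tripped me up in SSIS.
        # The DSN, credentials, and table below are placeholders, not real ones.
        import pyodbc

        conn = pyodbc.connect(
            "DSN=WarehouseTeradata;UID=svc_etl;PWD=secret",
            timeout=120,      # login/connection timeout, in seconds
        )
        conn.timeout = 600    # query/command timeout for statements on this connection

        cursor = conn.cursor()
        cursor.execute("SELECT COUNT(*) FROM partition_list")  # hypothetical table
        print(cursor.fetchone()[0])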

    Read the article

  • Random number generation algorithm for human brains?

    - by Magnus Wolffelt
    Are you aware of, or have you devised, any practical, simple-to-learn "in-head" algorithms that let humans generate (somewhat "true") random numbers? By "in-head" I mean preferably without any external tools or devices. Also, a high output (many random numbers per minute) is desirable. I asked this on SO but it didn't get much interest. Maybe this is better suited for Programmers. :) I'm genuinely curious about anything that people might have come up with for this problem.
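    To make the idea concrete, here is a minimal sketch (my own illustration, not an answer from the thread) of one approach people commonly suggest: a tiny linear congruential generator whose arithmetic is small enough to do mentally. The constants are illustrative assumptions; this particular choice happens to cycle through all 100 two-digit values, but it is in no way statistically strong.

        # Mental-arithmetic LCG sketch: keep a two-digit state, multiply by 21,
        # add 1, and keep only the last two digits. By the Hull-Dobell theorem
        # this choice of constants has full period 100, but it is NOT a good
        # statistical generator; it only illustrates the "in-head" idea.
        def mental_lcg(seed, count):
            x = seed % 100
            for _ in range(count):
                x = (x * 21 + 1) % 100   # multiply, add, drop all but the last two digits
                yield x

        if __name__ == "__main__":
            print(list(mental_lcg(seed=37, count=10)))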

    Read the article

  • Was it necessary to build this site in ASP.NET?

    - by Andrew M
    From what I'm told, the whole StackOverflow/StackExchange 'stack' is based on Microsoft's ASP.NET. SO and the SE sites are probably the most complex that I visit on a regular basis. There's a lot going on in every page - lots of different boxes, pulling data from different places and changing dynamically and responding to user interaction. And the sites work very smoothly, despite the high traffic. My question is, could this have been achieved using a different platform/framework? Does ASP.NET lend itself to more complex projects where other web frameworks would strain and falter? Or is the choice pretty incidental?

    Read the article

  • QotD: Sharat Chander on Java Embedded @ JavaOne

    This year, JavaOne is expanding to offer business leaders a chance to participate as well. I'm very proud to announce the deployment of "Java Embedded @ JavaOne." With the explosion of new unconnected devices and data creation, a new IT revolution is taking place in the embedded space. This net-new conference will specifically contain business content addressing the growing embedded ecosystem. As part of the "Java Embedded @ JavaOne" call for papers (CFP), interested speakers can continue forward and make business submissions, and due to high interest they also have the additional opportunity to make technical submissions for the flagship JavaOne conference, but ONLY for the "Java ME, Java Card, Embedded and Devices" track. Sharat Chander, in a set of posts on Java Embedded @ JavaOne to the JUG Leaders mailing list.

    Read the article

  • Developing an Elo-like point system for a multiplayer gaming site

    - by Alejandro Piad
    I'm currently working on a gaming site where users will submit virtual players for different games, like Chess, Nash, Backgammon, Go, etc. The idea is that users don't compete themselves, but through their virtual players. There will be leagues, tournaments, and other competition formats. The question is what would be a good rating system for users in this environment. Take into account that every user may have many different virtual players playing in many different games. As a general guideline I would like to guarantee the following properties:
    Users who have a lot of mediocre players should not score higher than users with a few very good players.
    A user with a high rating should not be penalized when he adds a new, bad player, until he has had enough time to improve that player.
    Users who don't play often should not score higher than users who play every day.
    Thanks in advance.
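    For reference, most schemes like this start from the standard Elo update, sketched below in Python (my own illustration, not part of the question; the K-factor and the "average of the top three players" user rating are arbitrary assumptions, shown only to address the first property above).

        # Standard Elo expected-score and update formulas.
        def expected_score(rating_a, rating_b):
            # Probability that A beats B under the Elo model.
            return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

        def update_elo(rating_a, rating_b, score_a, k=32):
            # score_a is 1 for an A win, 0.5 for a draw, 0 for a loss.
            exp_a = expected_score(rating_a, rating_b)
            new_a = rating_a + k * (score_a - exp_a)
            new_b = rating_b + k * ((1 - score_a) - (1 - exp_a))
            return new_a, new_b

        # One illustrative way to rate a *user* without rewarding sheer volume of
        # players: average their top three players (an assumption, not a recommendation).
        def user_rating(player_ratings, top_n=3, default=1500):
            best = sorted(player_ratings, reverse=True)[:top_n]
            return sum(best) / len(best) if best else default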

    Read the article

  • 2011 The Year of Awesomesauce

    - by MOSSLover
    So I was talking to one of my friends, Cathy Dew, and I'm wondering how to start out this post. What kind of title should I put? Somehow we're just randomly throwing things out and this title pops into my head, the one you see above. I woke up today to the buzz of a text message. I spent New Year's lying around until 3 am watching Warehouse 13 episodes and drinking champagne. It was one of the best New Year's I've spent, with my boyfriend and my cat. I figured I would sleep in until noon, but ended up waking up around 11:15 to that text message buzz. I guess my DE, Rachel Appel, had texted me "Happy New Years", because Rachel is that kind of person. I immediately proceeded to check my email. I noticed my Live account had a hit. The account I rarely ever use had an email. I sort of had that sinking suspicion I was going to get Silverlight MVP, right? So I open the email and something out of the blue happens: it says "blah blah blah SharePoint Server MVP blah blah…". I'm sitting here a little confused. What? Really? Just about when you give up on something, the unexplained happens. I am grateful for what I have every day.

    So let me tell you a story. I was a senior in high school and it was December 31st, 1999. A couple of days prior my grandmother had been complaining she had a cold and her assisted living facility was not going to let her see a doctor. She claimed to be very sick. On New Year's Eve Day 1999 my grandmother was rushed to the hospital sometime very early in the morning. My uncle, my little brother, and I were sitting in the waiting room eagerly awaiting news. The Sydney Opera House was playing in the background as New Year's 2000 for Australia was ringing in. They come out and they tell us my grandmother has pneumonia. She is in the ICU in critical condition. Eventually time passes in the day and my parents take my brother and me home. So in the car we had a huge fight that ended in the worst New Year's of my life. The next 30 days were the worst 30 days of my life. I went to the hospital every single day to do my homework and watch my grandmother. Each day was a challenge mentally and physically as my grandmother berated me in her demented state. On the 30th day my grandmother ended up in critical condition in the ICU, maxed out on painkillers. At approximately 3 am I hear my parents telling me they don't want to wake me up and that my grandmother had passed away. I must have cried more collectively that day than any other day in my life. Every New Year's Eve since, I have cried thinking about who she was and what she represented. She was human; looking back she wasn't anything great, but she was one of the positive lights in my life. She and my dad and my other grandmother constantly tried to make me feel great when my mother was telling me the opposite. I'd like to think the 11 years since 2000 have been the best 11 years of my life. I got out of a bad situation by using the tools that I had in front of me: good grades and getting into a college so I could aspire to be the person that I wanted to be. I had some great people along the way to help me out.

    So getting to the point: I like to help people further their lives somehow, in the best way I possibly can. This New Year's was one of the great years that helped me forget the past and focus on the present. It makes me realize how far I've come since high school and even since college.
    The one thing I've been grappling with over the years is how to feel good about making money while helping others out. I'd like to think I try really hard to give back to my community. I could not have done what I did without other people's help. I sent out an email prior to even announcing I got the award today. I can't say I did everything on my own. It's not possible. I had the help of others every step of the way. I'm not sure if this makes sense, but the award can't just be mine. This award is really owned by each and every one who helped me get here. From my dad to my grandmother to Rachel Appel to Bob Hunt to Jason Gallicchio to Cathy Dew to Mark Rackley to Johnny Ennion to Lee Brandt to Jeff Julian to John Alexander to Lori Gowin and to many others. Thank you guys for all the help and support. Technorati Tags: SharePoint Community, MVP Award, Microsoft Community

    Read the article

  • Health problem of a programmer

    - by gunbuster363
    Hi all, I've been annoyed by this finger ache for quite a long time. My fingers ache because of too much mouse clicking during office hours, plus playing games after work. I laid off games for a while and my fingers are getting better, but my right index finger still feels pressure when I click the mouse. I haven't gone to a doctor because I'm afraid the fee would be high and he would just tell me to rest my fingers; also, I don't know what kind of doctor I should go and see. My fingers get less pressure if I use my expensive DeathAdder at home (what a shame, I bought it for gaming, but now I use it for rest) because its buttons are softer; however, I cannot keep such an expensive mouse at my office because I am afraid people would steal it. I already use some tricks with the mouse, such as opening files with a single click and adding more shortcuts to the desktop for common jobs. Do you guys have some other tips for me? Thank you.

    Read the article

  • DIY Photo Rig Takes Laser-Triggered 3D Insect Photos

    - by Jason Fitzpatrick
    How do you catch a butterfly in flight and in 3D? You do it with this laser-triggered photo rig. This do-it-yourself monster is an absolute beauty of at-home engineering. It has dual focus planes, dual flashes, a laser trigger, and enough machined aluminum to make us wish we had a CNC out in the garage. If you're one part photographer, one part electronics tinkerer, and one part machinist, this is the kind of weekend project that will cement you into neighborhood DIY lore. Hit up the link below for a full build guide and sample photos. High-Speed 3D Portable Macro Unit [via DIY Photography]

    Read the article

  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
    After reading this section you should be able to:
    Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health.
    Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques.
    Identify the reasons for and elements of an asset management system.
    Plan and follow regular hardware and software maintenance routines.

    Fundamentals of Network Management
    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation
    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:
    Physical topology (types of LAN and WAN topologies – ring, star, hybrid)
    Access method (does it use Ethernet 802.3, token ring, etc.)
    Protocols
    Devices (switches, routers, etc.)
    Operating systems
    Applications
    Configurations (what version of operating system, and config files for server/client software)

    Baseline Measurements
    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate for your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations
    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management:
    Media installations and management (includes designing the physical layout of cable, etc.)
    Network addressing policies (includes choosing and applying an addressing scheme)
    Resource sharing and naming conventions (includes rules for logon IDs)
    Security-related policies
    Troubleshooting procedures
    Backup and disaster recovery procedures
    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management
    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software
    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. There are various software packages that do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base).
    Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161 (a minimal SNMP polling sketch follows at the end of these notes). Because of their flexibility and sophistication, network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (i.e. pinging a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX, and Linux.

    System and Event Logs
    Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer. Similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping
    When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:
    Protocol
    IP address
    User group
    DiffServ
    VLAN tag in a Data Link layer frame
    Service or application

    Caching
    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just convenience. It prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management
    Another key component in managing networks is identifying and tracking hardware. This is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management
    Networks are always in a state of flux, with changes including:
    Software changes and patches
    Client upgrades
    Shared application upgrades
    NOS upgrades
    Hardware and physical plant changes
    Cabling upgrades
    Backbone upgrades
    For a detailed explanation of each of these, read the textbook (pages 750-761).
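    As a concrete illustration of SNMP polling (my own sketch, not from the textbook; it assumes the third-party pysnmp package, an SNMPv2c agent with the community string 'public', and a placeholder address), a single GET of sysUpTime looks roughly like this:

        # Minimal SNMP GET sketch using the pysnmp library (pip install pysnmp).
        # The agent address and community string below are placeholder assumptions.
        from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                  ContextData, ObjectType, ObjectIdentity, getCmd)

        error_indication, error_status, error_index, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData('public', mpModel=1),               # SNMPv2c
            UdpTransportTarget(('192.0.2.10', 161)),          # agent address, UDP port 161
            ContextData(),
            ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0')),  # sysUpTime.0
        ))

        if error_indication:
            print(error_indication)
        else:
            for var_bind in var_binds:
                print(' = '.join(x.prettyPrint() for x in var_bind))

    A real monitoring loop would poll OIDs such as interface octet counters at a fixed interval and log the results, which is essentially what tools like MRTG automate.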

    Read the article

  • Analyze Drupal and WordPress sites' CPU load on a shared server

    - by Tedi
    Our hosting company is complaining that both our Drupal and WordPress websites, running on a shared server, are consuming too many CPU resources. The traffic for each site is not more than 100 users per day and, at first glance, we don't have very many plugins/add-ons. Is there any tool or resource to analyse what is causing that high CPU load? Thanks. Update: We decided to suspend our accounts while the problem was being debugged, but our hosting company (Site5) still said that they saw unacceptable activity on our sites, so we had to move to a dedicated server... We asked them several times to provide more information and they always came back saying that we had to purchase a higher-tier account. We finally decided to move to another hosting service.
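    Not an answer from the thread, but when all you have is web server access logs, a quick tally of which clients and URLs are hammering the sites is a common first step before reaching for profiling modules. A minimal sketch (it assumes a combined-format access log saved as access.log):

        # Tally requests per client IP and per URL from an access log to spot
        # aggressive bots or expensive endpoints that could explain high CPU.
        # Assumes the common/combined log format with the path as the 7th field.
        from collections import Counter

        ips, urls = Counter(), Counter()
        with open("access.log") as log:
            for line in log:
                parts = line.split()
                if len(parts) > 6:
                    ips[parts[0]] += 1     # client IP is the first field
                    urls[parts[6]] += 1    # request path follows the HTTP method

        print("Top clients:", ips.most_common(5))
        print("Top URLs:", urls.most_common(5))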

    Read the article

  • Open source framework quality [closed]

    - by Jonas Byström
    It's not hard to find snippets, components, or tools/toolkits in the open source world that hold the quality bar really high. I myself use git, Python, Linux, gcc, bash, and a whole range of others on a daily basis, and I love them. But when it comes to bigger frameworks, which are intended to handle larger tasks of an application without much interference, I'm not as enthusiastic. I've tried a few commercial frameworks (game engines), which were okay, but all the big open source frameworks which I've used myself, or which I have seen used in applications, were decidedly worse than their commercial equivalents. But I'm not sure if my experience is typical. Where have bigger open source frameworks for facilitating larger tasks of an application been able to equal or exceed commercial frameworks, and how were they better?

    Read the article

  • Something confusing about Single Responsibility Principle

    - by user1483278
    1) "In fact if two responsibilities are always expected to change at the same time you arguably should not separate them into different classes as this would lead, to quote Martin, to a 'smell of Needless Complexity'. The same is the case for responsibilities that never change - the behavior is invariant, and there is no need to split it."
    I assume that even if unrelated responsibilities are always expected to change for the same reason (or if they never change), we still shouldn't put them in the same class, since this would still violate the high cohesion principle?
    2) I've found two quite different definitions for SRP:
    "The Single Responsibility Principle says that a subsystem, module, class, or even a function, should not have more than one reason to change."
    and
    "There should never be more than one reason for a class to change."
    Doesn't the latter definition narrow SRP to the class level? If so, isn't the first quote wrong in claiming that SRP can also be applied at the subsystem, module, and function levels? Thank you.
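    To make the class-level reading concrete, here is a minimal Python sketch (my own illustration, not from the question) of a class with two reasons to change, split so that each resulting class has exactly one:

        # Before: one class with two reasons to change
        # (the report's content rules AND its output format).
        class Report:
            def __init__(self, title, lines):
                self.title = title
                self.lines = lines

            def render_html(self):
                body = "".join(f"<p>{line}</p>" for line in self.lines)
                return f"<h1>{self.title}</h1>{body}"

        # After: each class has a single reason to change.
        class ReportData:
            """Changes only when the report's content rules change."""
            def __init__(self, title, lines):
                self.title = title
                self.lines = lines

        class HtmlReportRenderer:
            """Changes only when the output format changes."""
            def render(self, report):
                body = "".join(f"<p>{line}</p>" for line in report.lines)
                return f"<h1>{report.title}</h1>{body}"

    Whether the same split is worth making at the subsystem or function level is exactly the question being asked.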

    Read the article

  • Securely expose WebService from Enterprise Network to Internet Client

    - by hotzen
    Are there any standards (or certified solutions) for exposing a (web) service to the internet from a very security-sensitive network (e.g. banking/finance)? I am not specifically talking about WS-* or any other transport-layer security à la SSL/TLS, but rather about important standards or certifications that must be obeyed. Are there any known products (coming from an SAP environment) that can provide a "high-security proxy" of some sort to expose specific web services to the internet? Any buzzwords on this subject that a CIO/CTO would be aware of?

    Read the article

  • Star Wars: An Infographic Flowchart

    - by Jason Fitzpatrick
    If you can't get enough of Star Wars lore, this minimalist set of infographics details major characters, conflicts, and alliances in the Star Wars universe. Courtesy of designer Marc Morera, the series of Star Wars infographics gives a quick summary, presents all the major players in the movies, and connects the players and events via flowchart. Hit up the link below to see all of them in their high-resolution glory. Star Wars Infographic [via Cool Infographics]

    Read the article

  • Cloud INaaS from Data Integration companies

    - by llaszews
    Traditional integration IT vendors are also starting to offer INaaS. Informatica has been the most aggressive integration vendor when it comes to offering INaaS. Informatica has offered INaaS for over five years and continues to add capabilities, has a number of high-profile references, and also continues to add out-of-the-box cloud integration with major COTS and SaaS providers. The Informatica Marketplace contains pre-packaged Informatica Cloud end-points and plug-ins. One such Marketplace solution is integration with Oracle E-Business Suite using Informatica integration. The Informatica E-Business Suite INaaS offering includes automatic loading and extraction of data between Salesforce CRM and on-premises systems, cloud-to-cloud, flat files, and relational databases. The entire Informatica Cloud integration solution runs in an Informatica-managed facility (PaaS). When running in a PaaS environment, Informatica offers an option to keep an exact copy of your cloud-based data on-premises for archival, compliance, and enterprise reporting requirements.

    Read the article

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system, etc.), but I have noticed a pattern: a lot of the stuff is interdependent and I cannot debug until everything is done, hence I do not debug for about 3 to 5 hours at a time. I am wondering if this is an acceptable practice for this part of the project, and if not, whether anyone can give me some advice? Update: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good and should correlate with high cohesion; if I am wrong please correct me.
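    For readers curious what a number like that actually measures, here is a rough sketch (my own, not the tool the poster used) that approximates McCabe cyclomatic complexity per function as 1 plus the number of decision points, using Python's ast module; real metrics tools count a few more constructs than this.

        # Approximate cyclomatic complexity: 1 + number of branching constructs
        # per function. Illustrative only; real tools refine the node list and
        # handle boolean operands and comprehension filters more precisely.
        import ast

        DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                          ast.BoolOp, ast.IfExp)

        def cyclomatic_complexity(func_node):
            return 1 + sum(isinstance(node, DECISION_NODES)
                           for node in ast.walk(func_node))

        def average_complexity(source):
            tree = ast.parse(source)
            funcs = [n for n in ast.walk(tree)
                     if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            return (sum(cyclomatic_complexity(f) for f in funcs) / len(funcs)
                    if funcs else 0.0)

        if __name__ == "__main__":
            sample = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
            print(average_complexity(sample))   # one 'if' -> complexity 2.0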

    Read the article

  • Effects to make a speeding spaceship look faster

    - by Badescu Alexandru
    I have a spaceship and I've created a "boost" feature that speeds it up. What effects should I implement to create the impression of high speed? I was thinking of making everything except my spaceship blurry, but I think something would still be missing. Any ideas? By the way, I am working in XNA with C#, but if you aren't familiar with XNA, describing some effects is still useful. The game is 3D and I've attached some screenshots: one in normal (non-boosted) mode and one in boosted mode (the craft speeds forward while the camera keeps moving at its normal, non-boosted speed).

    Read the article

  • Webinar Recording on Cross Platform Development with MonoTouch and Mono for Android

    - by Wallym
    The iPhone and Android are dominant in the marketplace. The two platforms currently have 85% of the smartphone market and are continuing to grow that share. Developers are being tasked with targeting these two platforms. In this session, we'll take a high-level look at how we can use C# and .NET knowledge to share code between iOS and Android. We'll look at linked files, using the Xamarin Mobile API, the challenges of running across platforms and frameworks, as well as other features of Visual Studio, MonoTouch, MonoDevelop, and Mono for Android that allow us to write as much code as possible that can run on both platforms. The following link is a recording of the webinar on Cross Platform Development with MonoTouch and Mono for Android. I am guessing that the link only works in IE. That's out of my control.

    Read the article

  • Oracle Virtual Networking Partner Sales Playbook Now Available

    - by Cinzia Mascanzoni
    The Oracle Virtual Networking Partner Sales Playbook is now available to partners registered in the OPN Server and Storage Systems Knowledge Zones. It equips you to sell, identify and qualify opportunities, pursue specific sales plays, and deliver competitive differentiation. Find out where you should plan to focus your resources, and how to broaden your offerings by leveraging the OPN Specialized enablement available to your organization. The playbook is accessible to member partners through the following Knowledge Zones: Sun x86 Servers, Sun Blade Servers, SPARC T-Series Servers, SPARC Enterprise High-End M-Series Servers, SPARC Enterprise Entry-Level and Midrange M-Series Servers, Oracle Desktop Virtualization, NAS Storage, SAN Storage, Sun Flash Storage, StorageTek Tape Storage.

    Read the article

  • How to pay for servers without ads?

    - by Matthieu
    Hi everyone! I have a website whose goal is to provide a free educational service (like Wikipedia does, though obviously not on the same scale). I wonder how I will continue to pay for the servers when I reach high traffic. I don't want to use any ads or sponsorships, because that is not the goal of the website (just like Wikipedia, once again). Does "please donate" work? If not, is there any alternative to a private/premium (paid) section of the website? Thanks

    Read the article

  • Speaking at AMD Fusion conference

    - by Daniel Moth
    Next Wednesday at 2pm I will be presenting a session at the AMD Fusion developer summit in Bellevue, Washington State. For more on this conference please visit the official website. If you filter the catalog by 'Speaker Last Name' to "Moth", you'll find my talk. For your convenience, below is the title and abstract Blazing-fast code using GPUs and more, with Microsoft Visual C++ To get full performance out of mainstream hardware, high-performance code needs to harness, not only multi-core CPUs, but also GPUs (whether discrete cards or integrated in the processor) and other compute accelerators to achieve orders-of-magnitude speed-up for data parallel algorithms. How can you as a C++ developer fully utilize all that heterogeneous hardware from your Visual Studio environment? How can your code benefit from this tremendous performance boost without sacrificing your developer productivity or the portability of your solution? The answers will be presented in this session that introduces a new technology from Microsoft. Hope to see many of you there! Comments about this post welcome at the original blog.

    Read the article

  • Lost the config of my monitor once I connected it to a KVM switch.

    - by jfmessier
    I used to have a monitor (a 20" Acer AL2017) connected to the external video port of my laptop. Everything was great and I could set a rather high resolution for it under Ubuntu (all versions). But since I connected it through a KVM, the monitor is no longer recognized, and I cannot go beyond 1024x768. My video adapter is an Intel 9xx-series chipset in an older Pentium Core 2 Duo laptop. The same monitor, using another video chipset (Intel 82G33/G31 rev 02), is properly detected through the switch box. Looks to me like the video chipset is not the best, and I can live with that. I see that I have no xorg.conf, and I understand that I would need to generate one so I can then use it to force the available modes on the X system. How can I generate the xorg.conf file for my Intel video chip, so I can then use it? Merci :-)

    Read the article
