Search Results

Search found 6682 results on 268 pages for 'edge cases'.


  • Breadcrumbs in a modern web application: do they make sense? [on hold]

    - by Xtreme Biker
    I'm currently beginning development of a new web application. The whole application is going to be bookmarkable, with all pages accessible via GET requests and URL parameters. Having said that, let's suppose I've got three entities in my application: Customer, Team and City. Each Customer and Team belongs to a City, and I've got a city-detail page which displays the detail for a specific city. So the following navigation paths are possible: Customers -> Customer detail (id=2) -> City detail (id=3); Football teams -> Team detail (id=5) -> City detail (id=3); Cities -> City detail (id=3). There are three possible ways of ending up in the city detail view. My question is: does it make sense to implement a breadcrumb to show such a history, when it is already available in the browser itself? Would it be more appropriate to show a breadcrumb for the last case only, no matter where we're coming from (a hierarchical breadcrumb)? That's what Jakob Nielsen points out here: "Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web's second-most-used feature. A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn't offer much help. Finally, a history trail is useless for users who arrive directly at a page deep within the site." Also, even if the history trail seems the most natural way to implement it, it requires extra effort to keep track of the whole trail, HTTP being a stateless medium.

    Read the article

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website? Let me lay out the situation for you. The team of programmers that I belong to has metrics associated with our code. Over the last several months, the number of errors in our live system has increased substantially. We require that our updates to applications be tested by at least one other programmer prior to going live. I am personally completely against this, as I think that applications should be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but they are most of the time too close to the code. The reason I think end users should test in our scenario is that we don't have business analysts, we just have programmers. I come from a background where BAs took care of all the testing once programmers signed off that it was ready to go live. We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments, and this does catch some bugs. We don't really do end user testing at all; I should say we don't really have anyone testing our code except programmers, which I think is what gets us into this mess (ideally, we would have BAs, QA, or professional testers test). We don't have a QA team or anything of that nature, and we don't have fully laid out test cases for our projects. OK, I am just a peon programmer on the bottom rung, but I am probably more tired of these issues than the managers complaining about them. So I can't simply tell them they are doing it all wrong; I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

    Read the article

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before, and I plan to add detail; let me know which ones you'd like to see more on:
      - Memory Manager Redesign: predictable sizing and governing of SQL memory consumption. sp_configure 'max server memory' now limits all memory committed by SQL Server; Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool); improved scalability for complex queries and operations that make >8K allocations; improved CPU and NUMA locality for memory accesses; a single memory manager that handles page allocations of all sizes; consistent out-of-memory handling and management across different internal components.
      - Optimized memory broker for column store indexes (Project Apollo).
      - Resource Governor: support for larger-scale multi-tenancy by raising the maximum number of resource pools from 20 to 64 (for 64-bit); predictable chargeback and isolation via a hard cap on CPU usage; vertical isolation of machine resources; resource pools can be affinitized to individual schedulers, groups of schedulers, or NUMA nodes; a new DMV for resource pool affinity.
      - CLR 4 support, which adds the advantages of .NET Framework 4.
      - sp_server_diagnostics: captures diagnostic data and health information about SQL Server to detect potential failures, analyzes internal system state, and is reliable when nothing else is working.
      - New SQLOS DMVs (introduced in 2008 R2 SP1): SQL Server service configuration via sys.dm_server_services; OS-related resource configuration via sys.dm_os_volume_stats, sys.dm_os_windows_info, and sys.dm_server_registry; XEvents for SQL and OS-related Perfmon counters; an extended sys.dm_os_sys_info. See previous blog posts here and here.
      - Scale / mission critical: increased scalability (support for Windows 8 maximum memory and logical processors); Dynamic Memory support in Standard Edition; Hot-Add Memory enabled when virtualized; various Tier-1 performance improvements, including reduced instruction counts for superlatches.
    Originally posted at http://blogs.msdn.com/b/sqlosteam/
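    As a quick illustration of the new OS-related DMVs listed above, a JDBC client can read them like any other view. A minimal sketch (the connection string and credentials are placeholders, not from the post; it assumes the Microsoft JDBC driver is on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Sketch: query sys.dm_server_services, one of the DMVs mentioned above.
        public class DmvProbe {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://localhost:1433;databaseName=master;encrypt=false";
                try (Connection con = DriverManager.getConnection(url, "user", "password");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT servicename, status_desc, last_startup_time FROM sys.dm_server_services")) {
                    while (rs.next()) {
                        System.out.printf("%s: %s (started %s)%n",
                            rs.getString("servicename"),
                            rs.getString("status_desc"),
                            rs.getTimestamp("last_startup_time"));
                    }
                }
            }
        }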

    Read the article

  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
    Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?"
    An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier hosts the root of the problem. While many things can cause performance problems, on active Enterprise Content Management systems keeping database statistics updated is extremely important.
    The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and its associated indexes in order to maximize the efficiency of the optimizer. These physical characteristics include: number of records, number of pages, and average record length. The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large number of documents is being added to or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include, but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand.
    A quick check of the statistics for the WebCenter Content (WCC) database can be performed via the query below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM DBA_TABLES
        WHERE TABLE_NAME = 'DOCUMENTS';

        OWNER       TABLE_NAME       NUM_ROWS  BLOCKS  AVG_ROW_LEN  LAST_ANALYZED
        ----------  ---------------  --------  ------  -----------  -------------------
        ATEAM_OCS   DOCUMENTS            4172      46           61  04/06/2012 11:17:51

    This output returns not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table, in the form <PREFIX>_OCS. This database username can later be used to check on other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM DBA_TABLES
        WHERE OWNER = 'ATEAM_OCS'
        ORDER BY NUM_ROWS ASC;

        OWNER       TABLE_NAME            NUM_ROWS  BLOCKS  AVG_ROW_LEN  LAST_ANALYZED
        ----------  --------------------  --------  ------  -----------  -------------------
        ATEAM_OCS   REVISIONS                 2051      46          141  04/09/2012 22:00:22
        ATEAM_OCS   DOCUMENTS                 4172      46           61  04/06/2012 11:17:51
        ATEAM_OCS   ARCHIVEHISTORY            4908     244          218  04/06/2012 11:17:49
        ATEAM_OCS   DOCUMENTHISTORY           5865     110           72  04/06/2012 11:17:50
        ATEAM_OCS   SCHEDULEDJOBSHISTORY     10131     244          131  04/06/2012 11:17:54
        ATEAM_OCS   SCTACCESSLOG             10204     496          268  04/06/2012 11:17:54
        ...

    The Oracle Database allows you to collect statistics of many different kinds as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database sets automatic statistics collection of this kind on by default; the DBMS_STATS package is intended only for specialized cases. The following subprograms gather certain classes of optimizer statistics: the GATHER_DATABASE_STATS, GATHER_DICTIONARY_STATS, GATHER_FIXED_OBJECTS_STATS, GATHER_INDEX_STATS, GATHER_SCHEMA_STATS, GATHER_SYSTEM_STATS, and GATHER_TABLE_STATS procedures. The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

        DBMS_STATS.GATHER_SCHEMA_STATS (
            ownname          VARCHAR2,
            estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
            block_sample     BOOLEAN  DEFAULT FALSE,
            method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
            degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
            granularity      VARCHAR2 DEFAULT get_param('GRANULARITY'),
            cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
            stattab          VARCHAR2 DEFAULT NULL,
            statid           VARCHAR2 DEFAULT NULL,
            options          VARCHAR2 DEFAULT 'GATHER',
            objlist          OUT      ObjectTab,
            statown          VARCHAR2 DEFAULT NULL,
            no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
            force            BOOLEAN  DEFAULT FALSE);

    There are several values for the OPTIONS parameter that we need to know about: GATHER reanalyzes the whole schema; GATHER EMPTY only analyzes tables that have no existing statistics; GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes); GATHER AUTO reanalyzes objects that currently have no statistics and objects with stale statistics (using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY). Example:

        exec dbms_stats.gather_schema_stats( -
          ownname => '<PREFIX>_OCS', -
          options => 'GATHER AUTO' -
        );
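    The same GATHER AUTO call can also be issued from client code. A minimal JDBC sketch (the URL and credentials are placeholders; the ATEAM_OCS owner is the example schema from above):

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        // Sketch: run the DBMS_STATS.GATHER_SCHEMA_STATS call shown above via JDBC.
        public class GatherWccStats {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/orcl"; // placeholder host/service
                try (Connection con = DriverManager.getConnection(url, "system", "password");
                     CallableStatement cs = con.prepareCall(
                         "begin dbms_stats.gather_schema_stats(ownname => ?, options => ?); end;")) {
                    cs.setString(1, "ATEAM_OCS");   // WCC schema owner found via DBA_TABLES
                    cs.setString(2, "GATHER AUTO"); // only objects with empty or stale statistics
                    cs.execute();
                }
            }
        }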

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions around the importance of readable code and whether performance improvements of microseconds were worth even considering. My father is a retired programmer; I showed him the responses and he was absolutely certain that if a coder does not consider performance in their code, even at the micro level, they are not a good programmer. I'm not so sure. Perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language implementation (PHP, in the above case)? The environmental factors could be important: the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code is when replicated trillions of times on millions of websites. I'd like answers based, preferably, on facts about programming. Is micro-optimisation important when coding? EDIT: My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us avoid obviously bad choices when coding, such as writing if (expensiveFunction() && counter < X) when it should be if (counter < X && expensiveFunction()) (example from @zidarsk8). This could be an inexpensive function, in which case changing the code would be micro-optimisation; but with a basic understanding, you would not have to, because you would write it correctly in the first place.
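    The reordering quoted above relies on the short-circuit behaviour of &&. A small, self-contained Java illustration (expensiveCheck is a hypothetical stand-in for any costly call):

        public class ShortCircuit {
            static int calls = 0;

            static boolean expensiveCheck() {
                calls++; // count how often the costly path actually runs
                return true;
            }

            public static void main(String[] args) {
                int limit = 10;

                calls = 0;
                for (int counter = 0; counter < 1000; counter++) {
                    if (expensiveCheck() && counter < limit) { /* work */ } // costly call always runs
                }
                System.out.println("expensive first: " + calls + " calls"); // 1000

                calls = 0;
                for (int counter = 0; counter < 1000; counter++) {
                    if (counter < limit && expensiveCheck()) { /* work */ } // cheap guard first
                }
                System.out.println("guard first:     " + calls + " calls"); // 10
            }
        }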

    Read the article

  • Architectural and Design Challenges with SOA

    With all of the hype about service-oriented architecture (SOA), primarily through the use of web services, not much has been said about the potential issues of using SOA in the design of an application. I am personally a fan of SOA, but it is not the solution for every application. Proper evaluation should be done on all requirements and use cases prior to deciding to go down the SOA road. It is important to consider how your application/service will handle the following perils as it executes. Example challenges of SOA: network connectivity issues, handling connectivity issues, and longer processing/transaction times. How many of us have had issues visiting our favorite web sites from time to time? The same issue will occur when using service-based architecture, especially if it is implemented using web services. Forcing applications to access services via a network connection introduces a lot of new failure points to the application. Potential failure points include: DNS issues, network hardware issues, remote server issues, and the lack of physical network connections. When network connectivity issues do occur, how the service clients are implemented is very important. Should the client wait and poll the service until it is accessible again? If so, what is the maximum wait time, or the maximum number of attempts before it stops retrying? Because services are distributed across a network, the response time of client applications automatically increases: processing time must now also include the time to send and receive messages from the called services. This can add anywhere from milliseconds to minutes per request, depending on network load and server usage on the service provider's side. If speed is a highly desirable quality attribute, then I would consider creating components that are hosted where the client application is located. References: Rader, Dave. (2002). Overcoming Web Services Challenges with Smart Design: http://soa.sys-con.com/node/39458
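    On the wait-and-poll question above, one common shape for the client is a bounded retry with exponential backoff, capping both the number of attempts and the wait between them. A minimal sketch (the limits and the Callable stand-in are illustrative, not from the article):

        import java.util.concurrent.Callable;

        public class BoundedRetry {
            // Retry a service call up to maxAttempts times, doubling the wait each time.
            public static <T> T callWithRetry(Callable<T> serviceCall,
                                              int maxAttempts,
                                              long initialBackoffMillis) throws Exception {
                Exception last = null;
                long backoff = initialBackoffMillis;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return serviceCall.call();
                    } catch (Exception e) {        // e.g. a connectivity failure
                        last = e;
                        if (attempt < maxAttempts) {
                            Thread.sleep(backoff); // wait before the next attempt
                            backoff *= 2;          // exponential backoff
                        }
                    }
                }
                throw last;                        // give up after maxAttempts
            }
        }

    Capping both the attempts and the wait keeps a connectivity failure from stalling the caller indefinitely, which is exactly the trade-off the questions above are probing.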

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, and rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business-level) inconsistencies in the data records, and some records were mis-entered by accident on the customer's side (which our software allowed in error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that the data troubles may alarm some managers at the customer, and that some processes on the customer's side may stop working because those processes have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development: data migration is to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software that works around a bad data structure. I am asking you: What are your thoughts on data migration, especially in real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties they cause? Any other interesting thoughts that belong to this topic?

    Read the article

  • What would you do if you were asked to recommend someone you are not professionally satisfied with?

    - by Hila
    Where I live, everyone in the IT business knows just about everyone else. This is why it is quite common here to get a phone call from a recruiter asking for your professional opinion of people you've worked with in the past, or to be asked by a friend for a recommendation. This is all nice and well until you are asked to recommend someone you weren't quite satisfied with professionally. There are several problems I can think of: Recommending unskilled people is generally inadvisable; it is unprofessional and hurts your reputation. Giving this person a bad recommendation will probably hurt his chances of getting the job, and refusing to recommend someone is just as bad as giving a bad recommendation. It may be that the new employer will be happy with this person's skills; is it fair to deny him the chance to turn over a new leaf and prove himself in a new place? Many times you really like this person and are very uncomfortable with the idea of giving him a bad recommendation or refusing his request for a recommendation. What would you do in each of these cases: if this person asked you personally for a recommendation; if you got a phone call from a recruiter asking for your opinion of him? Thanks!

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or by anyone who wants to try it out. New features include: support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded); per-application integration with Berkeley DB on Android; and server-side support for the Apache TomEE platform. Adding support for the Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, a.k.a. machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • Oracle SOA Governance EMEA Workshop for Partners & System Integrators: Nov 5-7th | Madrid, Spain

    - by Lionel Dubreuil
    The EMEA Fusion Middleware Product Management team is delighted to announce an exciting and much-awaited workshop on our market-leading SOA Governance offering. The Oracle SOA Governance solution is Oracle Fusion Middleware's strategic approach to governing SOA. Whether just embarking on an SOA program, or expanding from a project or pilot to broader deployment, the Oracle SOA Governance solution closes the loop on measuring SOA success from project inception through to realization, and provides the proof of ROI on SOA. Would your prospects and customers like to: align their SOA vision and execution; improve decision making; effectively manage business and technology change; enable control; foster enterprise-wide collaboration; reduce development costs; track their SOA investments and returns; and demonstrate the business value and ROI of SOA? This FREE hands-on workshop is dedicated to EMEA Partners & System Integrators (SIs). It will be delivered by Oracle HQ Product Management and will primarily focus on: SOA governance as a strategy and methodology; hands-on work with Oracle Enterprise Repository (OER) and Oracle Service Registry (OSR); when, how, and to whom to position our SOA Governance offerings; our SOA Governance Rapid Start service; and hands-on sessions for the most popular customer use cases. Seats are limited, book now - you cannot afford to miss this training! If you're interested, please contact Yogesh Sontakke (yogesh.sontakke-AT-oracle-DOT-com).

    Read the article

  • computer shows up twice, connection unknown

    - by Thomas G. Seroogy
    I added two computers to Ubuntu One. One machine is Windows. The Windows version seems to work fine, and I started syncing a folder by placing it into the Ubuntu One folder. All files and folders are visible when I go to my account online. On the Ubuntu machine, I selected the Download folder to sync. Shortly thereafter, I realized that one folder exceeded my storage limit. I tried to un-sync the folder, but the Ubuntu One and Stop Syncing This Folder options were not visible in the menu. Per Ubuntu instructions, I removed my Ubuntu computer from the list of syncing computers. Per Ubuntu instructions, I re-added the Ubuntu computer. However, I find that two computers with the same name are now listed, both in the desktop app and on the web. Plus, the connection is "unknown". I have removed and re-added the computer several times with the same results. In all cases, I remove the computers using the Ubuntu One desktop app, then remove them from my account on the web, remove the Ubuntu One password, and restart the Ubuntu One app. The problem remains. Thanks in advance for any replies and help.

    Read the article

  • Accessing hardware via USB from proprietary Windows software using Wine

    - by Carlos Eugenio Thompson Pinzón
    I have this proprietary software that accesses some hardware using USB. Over a year ago I tried to install it on Ubuntu using Wine (the program is written for Windows). The UI seemed to work just fine, but it had no access to the USB port. Back then I had to license a Windows copy in order to get the job done. Now the Windows version we were using is deprecated and no longer available, and current versions cost three times as much. So it is time to give Linux another try. How can I ensure that the USB device is available to a Wine application? Neither the application nor the hardware installs any driver; the app just polls all available USB devices and performs a handshake if it recognizes that the hardware is present. I want to minimize the test cases before abandoning Linux one more time. Update: I've just tried again (in the hope that an upgrade was made since last year), and it is not working. The proprietary Windows app is not finding the hardware.

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validation of the requirements of the intended use. For me that is more high-level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradictory information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -arch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developers there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast, which tells the compiler to assume that the build machine is also the deployment machine, so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors, so in most cases this is a very good choice. It is also of interest to know which processors support the various architectures. The following Venn diagram attempts to show this; a textual description is as follows: The T1 and T2 processors, in addition to most other SPARC processors shipped in the last 10+ years, supported V9b, or sparcvis2. The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating-point multiply-accumulate instruction in the sparcfmaf architecture. Support for this instruction also appeared in the T3; this is called sparcvis3. Later SPARC64 processors added the integer multiply-accumulate instruction; this architecture is sparcima. Finally, the T4 includes support for both the integer and floating-point multiply-accumulate instructions in the sparc4 architecture. So the conclusion should be: floating-point multiply-accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it. The T4 is a very good machine to deploy to because it supports all the current instruction sets.

    Read the article

  • Multiple monitors showing same screen but different resolutions

    - by Luis Alvarado
    Is it possible to have 2 or more monitors showing the same screen (for example, the same desktop) but with different resolutions? Like the clone option in Nvidia, or the mirror option in the Display settings in Ubuntu, but instead of showing the same output at the same resolution, both would show the same output using the resolution that is native to each connected monitor. In my case, if I have a netbook with a max resolution of 1360x768 and a TV that has 1280x1024, they would both show the same desktop, but each at its own resolution, compatible with each device. This would help with trying to find a resolution that works on both monitors, and in cases like a mini netbook and a huge TV it would solve issues like having a max of 800x600 on one monitor and a min of 1024x768 on the other. In the case I tested I was using an HDMI cable, but this question also involves VGA and any other connection. I have 3 test scenarios for this: Scenario 1 - an HP DV6000 laptop (Intel integrated video) with 1360x768, connected to a Samsung LED 42" TV that has 1280x900. Scenario 2 - an EEE laptop with 1024x600 (Intel integrated video), connected to a Sony LCD TV that supports 1280x900. Scenario 3 - an Intel desktop with an Nvidia 440 GT, with HDMI connected to a Soneview 32" TV that supports 1920x1080, and VGA connected to an Epson video beam that supports 1280x1024 max. In these 3 scenarios I need to be able to show the same desktop and same views, but at different resolutions on each output device. UPDATE: Tested with Xubuntu, and the way it handles multiple monitors is precisely what I am asking for: the ability to handle the resolutions of different monitors showing the same thing.

    Read the article

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to the following one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:

        interface ICompressor {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor {
            byte[] Compress(Base instance) {
                // Common compression guts for Base class
                ...
                if( instance is Derived1 ) {
                    // Compression guts for Derived1 class
                }
                if( instance is Derived2 ) {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
                ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take this new class into account. Is there a design pattern that allows me to decouple this, and allows me to easily introduce more classes into the Base hierarchy and/or additional compression strategies?
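    One common way out of the is-chain above is to key the per-type compression guts off the runtime type, so adding a derived class means registering one new handler instead of editing every strategy. A minimal sketch of that registry idea, transposed to Java (all names are illustrative; a visitor/double-dispatch design is the other usual option):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Function;

        // Each strategy owns a registry mapping runtime type -> compression handler.
        class Base {}
        class Derived1 extends Base {}
        class Derived2 extends Base {}

        class RegistryCompressor {
            private final Map<Class<?>, Function<Base, byte[]>> handlers = new HashMap<>();

            <T extends Base> void register(Class<T> type, Function<T, byte[]> handler) {
                // Adapt the typed handler to the Base-typed map entry.
                handlers.put(type, instance -> handler.apply(type.cast(instance)));
            }

            byte[] compress(Base instance) {
                Function<Base, byte[]> handler = handlers.get(instance.getClass());
                if (handler == null)
                    throw new IllegalArgumentException("No handler for " + instance.getClass());
                return handler.apply(instance);
            }
        }

        // Usage: a new derived class only needs one register(...) call per strategy.
        // RegistryCompressor strategy1 = new RegistryCompressor();
        // strategy1.register(Derived1.class, d -> new byte[] { 1 }); // Derived1 guts
        // strategy1.register(Derived2.class, d -> new byte[] { 2 }); // Derived2 guts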

    Read the article

  • proxychained browser unable to open files

    - by Cocoro Cara
    Ubuntu 10.10, Midori 0.3.2 browser (same problem with Epiphany 2.30.2 and Chrome 11.0.686.1 dev; haven't tried with Firefox yet), Proxychains-3.1 installed and working fine. Here is the deal: when NOT proxychained, Midori or Epiphany can download and open a file (e.g. PDFs from a Google search) in Evince without problem. But when proxychained, neither browser can open PDF files. The message is "file xxx downloaded". Then it tries to open the file, an indication appears in the status bar, the spinning wheel appears, and then nothing; Evince doesn't open. What's going on here? In both cases (with and without proxychains) files are downloaded to the /tmp folder, and they have the same file permissions and ownership. What's different when proxychained? Why can't the files be opened, and why are there no error messages or notifications? Why the silent failure? I don't want to use FF or Chrome: Chrome does not follow my GTK2 customizations, and FF is just too resource-heavy. Please help.

    Read the article

  • Book: DevOps for Developers

    - by Tori Wieldt
    We all know development and operations often act like silos, with "Just throw it over the wall!" being the battle cry. Many organizations unwittingly contribute to the gaps between teams, with management by (competing) objectives; a clash of Agile practices vs. more conservative approaches; and teams using different sets of tools, such as Nginx, OpenEJB, and Windows on developers' machines and Apache, GlassFish, and Linux on production machines. At best, you've got sub-optimal collaboration; at worst, you've got the Hatfields and the McCoys. The book DevOps for Developers helps bridge the gap between development and operations by aligning incentives and sharing approaches for processes and tools. It introduces DevOps as a modern way of bringing development and operations together. It also aims to broaden the use of Agile practices in operations, to foster collaboration and streamline the entire software delivery process in a holistic way. Some individual aspects of DevOps may not be new (for example, you may have used the tool Puppet for years already), but with a new mindset ("my job is not just to code, it's to serve the customer in the best way possible") and a complete set of recipes, you'll be well on your way to success. DevOps for Developers also provides real-world use cases (e.g., how to use Kanban or how to release software); it provides a way to be successful in the real development/operations world. DevOps for Developers is written by Michael Hutterman, Java Champion and founder of the Cologne Java User Group. "With DevOps for Developers, developers can learn to apply patterns to improve collaboration between development and operations, as well as recipes for processes and tools to streamline the delivery process," Hutterman explains.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development, I raised my head above the parapet to read a few odds and ends, and then thought: why don't they know this? Such as this article here - in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple: just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD onward) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading in order to super-charge the file processing, and it can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing, on the way to super-charging big data files using Hadoop; the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration, I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration, plus (and it's a big plus) it was much faster than before. So if you are doing any file-to-file transformations, check it out!
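    As a rough sketch of the technique described above (plain java.io streams plus a small thread pool so several source files move at once), something like the following captures the idea. Paths, file names, and the thread count are illustrative, and this is not the shipped KM code:

        import java.io.BufferedInputStream;
        import java.io.BufferedOutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ParallelFileMove {
            // Stream one file straight to the target: no staging area in between.
            static void copy(String src, String dst) throws IOException {
                try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(src));
                     BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(dst))) {
                    byte[] buf = new byte[1 << 16];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                // Two worker threads, mirroring the thread count used in the post.
                ExecutorService pool = Executors.newFixedThreadPool(2);
                for (String name : List.of("a.dat", "b.dat")) { // stand-in for wildcard matches
                    pool.submit(() -> {
                        copy("in/" + name, "out/" + name);
                        return null;
                    });
                }
                pool.shutdown();
            }
        }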

    Read the article

  • Dealing with an Idiot [closed]

    - by inspectorG4dget
    I'm a 4th-year university computer science student, and I have a problem that I can't seem to find a straight answer to. As a 4th-year computer science student, I spend more time in the computer lab on campus than in my own home. This means that getting along with everyone else here is very important to me. In most cases this is not an issue, because my interactions with almost all the people here fall into one of the following categories: "Let me help you, junior"; "Hi fellow student in a course I'm taking, I'm having trouble with this assignment question. Can you give me a hint as to how you solved it?"; "Hi fellow student in a course I'm taking, this is how I solved the problem that you're stuck on. Hope it helps"; "Hi fellow student, I noticed that you're working on a project using a library that I'm interested in. Can we set up a time so I can learn about this library from you?". This model of interaction works very well for me. However, there is one fellow student who manages to make my life hell beyond all of this (his name is not important; let's just call him "Sam"). He seems to be always (pardon my crass description) high, and completely unwilling to contribute to a constructive, academic conversation. He's a pretty smart guy, but just comes across as (I hate to say something like this about a fellow student, but) an imbecile. He also has ignorant opinions on important topics, some of which pertain to my specialization (AI, NLP, etc.), and when I try to explain to him why he's wrong, all he does is insult me and put me in a foul mood. I have tried ignoring him (sitting somewhere else in the lab, headphones, etc.), but he seems to like doing this, because he approaches me, and no amount of "leave me alone" seems to do the trick. Can anyone please suggest how to deal with this man in a civil way? Thank you

    Read the article

  • Seeking a solution for printing/reporting in .NET

    - by Parhs
    I am developing an application that, in extreme cases, prints about 20-25 pages per minute, in separate threads, to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it is impossible to change the codepage/emulation. Also, most computers running my software are low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device, and most of these don't do a good job extracting text from XPS: they don't follow the standard, only the way Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it is impossible to send a paper-cut command after the sign. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and do text replacement. I know there is StringTemplate, which could handle the template generation, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution or approach for this? Or is there anything ready-made?

    Read the article

  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The coming posts over the next two months will form a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a content-driven portal requires the development team to consider. The current plan is to deliver the following subjects, each topic enclosed in a separate blog post:
      - Introduction - introduction to the series of posts and what to expect at the end of the series
      - Components, part 1 - UCM, Site Studio and a high-level introduction to content templates
      - Components, part 2 - page templates and the navigation model
      - Components, part 3 - applied customization framework for Content Presenter task flows
      - Scenario 1 - enable a portal for runtime administration
      - Scenario 2 - enable a portal for internationalization
      - Scenario 3 - enable a portal for content workflows
    Background: This post series is intended to help customers, partners and consultants understand the concept of a WebCenter Portal project where the main focus, or a majority of the portal, is content interaction. Today, most of the portal installations Oracle WebCenter Portal is involved in have a vast majority of content-based pages. Many portal projects have run, or will run, into challenges; to mitigate these challenges, the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios; they will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a content-driven portal. The selected scenarios are based on the most common use cases we have identified to date.

    Read the article

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-MSB-set to some more complicated bitwise transformations), and I mean to release it as free software. I'm a bit confused about one design aspect of the library, though. Many of the methods/transformations in the library come in different endianness variants. A simple example is a getBitAt method that regards index 0 as the least significant bit, or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped about how best to package the library. Should methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using "endianness" somewhat colloquially, to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial-transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines or transmission techniques here.
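    For concreteness, here is a hedged sketch of the two packagings being weighed above, transposed to Java (the MSB/LSB and Endianness names are the poster's; a 32-bit word is assumed):

        // Option 1: an explicit order parameter in the signature.
        enum Endianness { LITTLE, BIG }

        final class Bits {
            // Index 0 is the LSB for LITTLE and the MSB for BIG (32-bit word assumed).
            static boolean getBit(int value, int index, Endianness order) {
                int shift = (order == Endianness.LITTLE) ? index : (31 - index);
                return ((value >>> shift) & 1) != 0;
            }
        }

        // Option 2: identically named methods on separate classes, one per convention.
        final class LSB {
            static boolean getBit(int value, int index) {
                return ((value >>> index) & 1) != 0;
            }
        }

        final class MSB {
            static boolean getBit(int value, int index) {
                return ((value >>> (31 - index)) & 1) != 0;
            }
        }

    Call sites read differently under the two designs: Bits.getBit(x, 0, Endianness.BIG) versus MSB.getBit(x, 0). The second form keeps each call site unambiguous without an extra argument, which is roughly the trade-off in question.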

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or:

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am then often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope, I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a filename. I think perhaps this would be a non-issue if I could use non-nested, overlapping scopes:

        begin scope1
            filename = ...
            begin scope2
                file = open(filename)
            end scope1
            // use file here
            // can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we have to solve this scope problem?
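    For comparison, block-scoped languages can confine the temporary almost exactly as the pseudocode above wants, minus the overlapping scopes. A short Java sketch of the idiom (illustrative, not the poster's code):

        import java.util.regex.Pattern;

        public class ScopeDemo {
            public static void main(String[] args) {
                final Pattern regexp;
                {
                    // The temporary lives and dies inside this bare block.
                    String[] parts = { "foo", "bar" };
                    regexp = Pattern.compile(String.join("|", parts));
                }
                // 'parts' is out of scope here; only the finished 'regexp' remains.
                System.out.println(regexp.matcher("foobar").find()); // true
            }
        }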

    Read the article
