Search Results

Search found 658 results on 27 pages for 'extreme'.


  • Are Get-Set methods a violation of Encapsulation?

    - by Dipan Mehta
    In an object-oriented framework, one expects strict encapsulation: internal variables are not to be exposed to outside applications. Yet in many codebases we see tons of get/set methods that essentially open a formal window onto internal variables which were originally meant to be strictly off-limits. Isn't that a clear violation of encapsulation? How widespread is this practice, and what should be done about it? EDIT: I have seen discussions with two extreme opinions: on one hand, people believe that because the get/set interface is the means by which any parameter is modified, it does not qualify as a violation of encapsulation. On the other hand, there are people who believe it does. Here is my point. Take the case of a UDP server with the methods get_URL() and set_URL(). The URL (to listen on) is genuinely a parameter that the application needs to supply and modify. In the same class, however, properties like get_byte_buffer_length() and set_byte_buffer_length() clearly point to values that are internal. Doesn't that imply a violation of encapsulation? In fact, even get_byte_buffer_length(), which doesn't modify the object, still misses the point of encapsulation, because now some app knows I have an internal buffer! Tomorrow, if the internal buffer is replaced by something like a packet_list, the method becomes dysfunctional. Is there a universal yes/no answer for get/set methods? Is there any strong guideline that tells programmers (especially junior ones) when they violate encapsulation and when they do not?
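    As an illustration of the distinction the question draws, here is a minimal Java sketch (all names are hypothetical, not from the question): the listen URL is genuine configuration, so a get/set pair is defensible, while the buffer stays private behind a behavior-oriented method, leaving the class free to swap the byte array for a packet list later.

      // Hypothetical UDP server. The URL is part of the contract; the buffer
      // is an implementation detail that callers never learn about.
      public class UdpServer {
          private String url;                     // configuration: safe to expose
          private byte[] buffer = new byte[4096]; // internal: never exposed

          public String getUrl() { return url; }
          public void setUrl(String url) { this.url = url; }

          // Instead of set_byte_buffer_length(), the caller states a need.
          // If the byte[] later becomes a packet list, this method survives.
          public void expectDatagramsUpTo(int maxPayloadBytes) {
              if (maxPayloadBytes > buffer.length) {
                  buffer = new byte[maxPayloadBytes];
              }
          }
      }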

    Read the article

  • HOUG Konferencia 2012

    - by user645740
    HOUG Konferencia 2012, the regular conference of Hungarian Oracle users, begins today, 26 March 2012, in Egerszalók, with plenty of interesting talks, roundtable discussions, and networking. You can still register in person on site, and HOUG members get a discounted rate. www.houg.hu Interest is very high! The hotel and a few nearby lodgings are already full; fortunately there are still many hotels and inns in the area, so anyone who wants a room can still find one. The program can be viewed and downloaded at: http://www.houg.hu/contents/files/houg2012_program_vegleges.pdf On Tuesday, 27 March, from 10:15 to 11:00, John Abel (Business Development Director, EMEA) will give an exciting talk on Oracle engineered systems, "Extreme Performance with Oracle Engineered Systems", covering among others the Oracle Exadata Database Machine. Also on Tuesday afternoon, from 16:35 to 17:40, a roundtable discussion moderated by Fodor Endre, "Célrendszerek alkalmazhatósága a mindennapokban" (Applicability of engineered systems in everyday use), will cover the opportunities for and requirements of deploying these systems in Hungary: Exadata, SPARC SuperCluster, Exalogic, Exalytics, etc. On Tuesday afternoon, from 13:45 to 14:35, I will give a talk titled "Oracle Exadata Database Machine: a világ leggyorsabb adatbázisgépe" (the world's fastest database machine). The Tuesday-afternoon "from data collection to analysis" session covers business intelligence and data warehouse customer experiences and technologies; the session chairs are Arató Bence (HOUG) and Fekete Zoltán (me). BMW test drives and theatre and concert programs round out the event.

    Read the article

  • Get a Little Smarter . . .

    - by Michelle Kimihira
    Author: Rimi Bewtra, Senior Director, Product Marketing, Oracle Fusion Middleware. This month I had a chance to gain some valuable insights into Oracle's latest product innovations and customer successes in my conversation with Amit Zavery, Vice President of Product Management for Oracle Fusion Middleware. In this 10-minute podcast, Amit quickly outlined a few of Oracle's recent major announcements, including: Oracle Exalogic Elastic Cloud, our flagship engineered system for running business applications, which provides extreme performance, reliability, and scalability while delivering lower total cost of ownership, reduced risk, higher user productivity, and one-stop support; and Oracle Application Development Framework (ADF) Mobile, an HTML5- and Java-based framework that enables developers to easily build, deploy, and extend enterprise hybrid mobile applications across multiple mobile operating systems, including iOS and Android, from a single code base. And did you know Oracle has 125,000 Fusion Middleware customers? Amit shared a few of his favorite customer success stories and gave me the latest view from the leading industry analysts. If you have 10 minutes, you too can get a little smarter … take a listen and let's catch up soon. Additional Information: Product information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • More BI Showcase Events - Greensboro, NC & Tampa, FL

    - by Rob Reynolds
    As the momentum around OBIEE 11g continues, we are providing more opportunities to get a hands-on view of the new technology via our Oracle Business Intelligence Showcases. Next week we will have Showcases in Greensboro, NC and Tampa, FL. I will be presenting at both, so please stop by and say hello while learning about the latest in Oracle BI & DW technology. Pre-registration is required. You can register for the events at the links below: Greensboro, NC - Tuesday, December 7, 2010. Tampa, FL - Wednesday, December 8, 2010.
    Session Agenda:
    9:00 a.m. – 10:00 a.m. Registration and Welcome
    10:00 a.m. – 11:00 a.m. Session Keynote: Oracle's New Generation of Business Intelligence Solutions and Innovations
    11:00 a.m. – 12:00 noon Session 1. Track 1: Oracle Business Intelligence Enterprise Edition 11g: End User Experience. Track 2: Management Reporting with Oracle Essbase
    12:00 noon – 1:00 p.m. Networking Lunch
    1:00 p.m. – 2:00 p.m. Session 2. Track 1: Oracle Business Intelligence Enterprise Edition 11g for Power Users, Developers, and Administrators. Track 2: Oracle BI Applications: The Value of Cross-Functional BI
    (Break to change rooms)
    2:00 p.m. – 3:00 p.m. Session 3. Track 1: Extreme Performance Data Warehousing. Track 2: Master Data Management: The Single Source of Truth for Real-Time Decisions
    3:15 p.m. Wrap-Up and Raffle Prize

    Read the article

  • Unlock More Value: Oracle Platinum Services at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    In a bold move to provide even more value to customers who adopt the extreme performance of Oracle Exalogic Elastic Cloud, Oracle Exadata, and Oracle SPARC SuperCluster, Oracle recently launched a set of enhanced services that help IT managers decrease the cost and complexity of supporting their IT environments: Oracle Platinum Services. Learn more by attending the Oracle Platinum Services: Unlock More Value with Advanced Support session at Oracle OpenWorld. In this session, Oracle shares how to achieve maximum performance and lower total cost of ownership through certified configurations for Oracle engineered systems and Oracle Platinum Services. Hear about the industry-leading Oracle Platinum Services offering and tools already used by Oracle customers, including remote fault monitoring, faster response times, and patching services. Vincent Biddlecombe, chief technology officer of Transplace, a third-party logistics provider, is already seeing results. He says, “The Platinum Services offering has been a great addition to Oracle Premier Support. This level of support is unique in my experience. We saw results very quickly. Our experience has exceeded my expectations.” The patching services have enabled Transplace to stay up to date on the latest improvements. According to Biddlecombe, “We've gone from being eight patches behind to completely up to date, and I'm extremely happy.” Visit us on Monday, October 1 at 12:15 p.m. and become familiar with industry-leading Oracle Platinum Services. For more information on Oracle Customer Support Services sessions and events, go to Oracle Customer Support Services.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my recent article on temporary tables, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite their obvious quirks, it is best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to those quirks, or if you are doing complex joins over a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer of the potential hazards and how to avoid them. You can be hit by a badly performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables involved in a query. However complex the heuristics used to determine the best way of executing a SQL query might be, and they most certainly are complex, the Query Optimizer will occasionally fail with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same join with two table variables, with two table variables plus OPTION (RECOMPILE) on the join query, and with two temporary tables. The timings are a bit jerky because the timing isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable outperforms the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it does well. What happens as your table size increases? From around 30,000 rows, the table variable is locked into a very bad execution plan unless you use OPTION (RECOMPILE) to give the Query Optimizer a decent estimate of the size of the table. With OPTION (RECOMPILE), though, it is smokin’. Well, up to 120,000 rows, at least. It performs better than a Temporary table, and in a good linear fashion. What about mixed joins, where you are joining a temporary table to a table variable? You’d probably expect the query optimizer to throw up its hands and produce a bad execution plan, as if everything were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast, the Query Optimizer, and it can come up with surprises. What if we change the query very slightly, to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table Variable and the temporary table: the table variable is faster up to 8,000 rows, and then there is little in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may have been formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers are a rather unreal example, so let’s try something rather different and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? We take all the common words in the English language (60,387 of them) and put them in a table: variously in a Table Variable with the word as a primary key, in a Table Variable heap, and in Temporary Tables with and without a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that appear in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:
    If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
    If one table contains a primary key, the other is a heap, and both are Table Variables, it took 46 ms.
    If both Table Variables use a unique constraint, the query takes 36 ms.
    If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!
    If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    If one table contains a primary key, and both are Temporary Tables, it took 56 ms.
    If both tables are temporary tables and both have primary keys, it took 46 ms.
    Here we see table variables that are joined on their primary key again enjoying a slight performance advantage over temporary tables, whereas where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, its time suddenly drops to 76 ms. If you add unique indexes, you’ve done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to form a hypothesis.
    If you are using table variables and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are joining on, or else use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the query that hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew. In describing table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a table consisted of 100,000 rows in which the key column held a single value (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimizer team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it: the following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and all was well. Undeniably, real data does occasionally cause problems for the performance of joins on Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range. Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required and the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however: as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or, if that is impossible, by adding the OPTION (RECOMPILE) hint. Occasionally, the way the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
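    To make the cure described above concrete, here is a hypothetical sketch of the pattern, driven from JDBC against a SQL Server instance (the connection string, table shapes, and row counts are invented for illustration and are not the tests from this article):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class TableVariableJoinDemo {
          public static void main(String[] args) throws Exception {
              // Connection string is an assumption; point it at any SQL Server.
              try (Connection con = DriverManager.getConnection(
                       "jdbc:sqlserver://localhost;databaseName=tempdb;integratedSecurity=true");
                   Statement st = con.createStatement()) {
                  // Two table variables joined as heaps. Without OPTION (RECOMPILE)
                  // the optimizer guesses at their row counts; with it, the plan is
                  // compiled against the actual sizes.
                  String batch =
                      "SET NOCOUNT ON; " +
                      "DECLARE @a TABLE (n INT); " +
                      "DECLARE @b TABLE (n INT); " +
                      "INSERT INTO @a SELECT TOP (50000) ABS(CHECKSUM(NEWID())) % 100000 " +
                      "FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2; " +
                      "INSERT INTO @b SELECT TOP (50000) ABS(CHECKSUM(NEWID())) % 100000 " +
                      "FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2; " +
                      "SELECT COUNT(*) AS matches FROM @a a JOIN @b b ON a.n = b.n " +
                      "OPTION (RECOMPILE);";
                  try (ResultSet rs = st.executeQuery(batch)) {
                      while (rs.next()) {
                          System.out.println("matches: " + rs.getInt("matches"));
                      }
                  }
              }
          }
      }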

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    Hi. I'm sure you all know of games like Dwarf Fortress: massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite is, I think, the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading machinery, and have one patch of noise generate each region. But this would just result in huge numbers of islands. At the other extreme, I don't think I can realistically generate one supermassive sheet of Perlin noise, and that would just be one big island anyway, I think. I am pretty sure Perlin noise, or some noise, is the answer in some way. I mean, the map looks really nice, and you could replace the ASCII with tiles and get something very nice-looking. Anyone have any ideas? Thanks. :D -TheCommieDuck
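    For what it's worth, the usual approach (a sketch under my own assumptions, not something from the cited article) is to keep one deterministic noise function over global world coordinates and let each region sample its own window of it; adjacent regions then line up seamlessly, and the world is pseudo-infinite. A minimal Java sketch, using bilinearly interpolated value noise as a stand-in for Perlin noise:

      import java.util.Random;

      public class ChunkedNoise {
          static final int CHUNK = 64;
          final long seed;

          ChunkedNoise(long seed) { this.seed = seed; }

          // Deterministic per-lattice-point value derived from the world seed,
          // so the same (x, y) always produces the same value on any machine.
          double lattice(int x, int y) {
              long h = seed ^ (x * 341873128712L) ^ (y * 132897987541L);
              return new Random(h).nextDouble();
          }

          // Interpolated value noise at a global coordinate.
          double noise(double x, double y) {
              int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
              double fx = x - x0, fy = y - y0;
              double a = lattice(x0, y0), b = lattice(x0 + 1, y0);
              double c = lattice(x0, y0 + 1), d = lattice(x0 + 1, y0 + 1);
              double top = a + (b - a) * fx, bottom = c + (d - c) * fx;
              return top + (bottom - top) * fy;
          }

          // A chunk samples global coordinates, so its edges match its
          // neighbors' edges exactly: no islands at region boundaries.
          double[][] generateChunk(int chunkX, int chunkY) {
              double[][] heights = new double[CHUNK][CHUNK];
              for (int i = 0; i < CHUNK; i++)
                  for (int j = 0; j < CHUNK; j++)
                      heights[i][j] = noise((chunkX * CHUNK + i) / 32.0,
                                            (chunkY * CHUNK + j) / 32.0);
              return heights;
          }
      }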

    Read the article

  • Operating System Clone from One Motherboard to Another

    - by Devian
    Lately my family and I had the idea of building a super computer from scratch. While we were planning our setup, an idea came to mind that seems possible, but I also want your opinions. Let's say that we have two ATX motherboards and one MicroATX. Motherboard setup 1: 1x ASUS Rampage Extreme Black Edition, 1x Intel Core i7 4960X, 4x GTX Titan, 8x 8GB 1866 RAM. Motherboard setup 2: 1x SuperMicro X9DRG-QF, 2x Intel Xeon E7-8890 v2, 1x NVIDIA Quadro K6000, 4x NVIDIA Tesla K40, 128 GB 1866 RAM. Now imagine a solid state drive with a switch connecting it to both of the motherboards. Can I edit and copy all the data in the first motherboard's RAM to the other's, so that after switching the SSD to the second motherboard I can continue running my current operating system from the second motherboard, and vice versa? Let's say my "switch application" modifies everything the kernel needs so that it believes nothing happened and continues its operation from the same point at which the first motherboard stopped (changes to the device list, CPU cores, drivers, etc.).

    Read the article

  • Steps to install Windows 7 64-bit on RAID 0 (striping)?

    - by marco.ragogna
    In a few days I will receive 2x 500 GB hard disks (ST3500418AS) and an ASRock 890GX Extreme 3. My idea is to install Windows 7 64-bit on them in a RAID 0 (striping) configuration. I'm wondering which steps I should follow, since I've never done this before. Should I install Windows 7 on a single disk and set up the RAID 0 afterwards, or should I perform some steps in the BIOS first and then install Windows 7? If you can, please list all the necessary steps I should follow. Thank you in advance, Marco

    Read the article

  • Windows 7 Service Pack 1 Install Failing

    - by Cu Jimmy
    I am having real trouble updating one of my machines to Windows 7 Service Pack 1. It's an Asus P5K board with an X9650 quad-core Extreme chip, which is a less common type of chip. I was wondering if anyone has had issues with this kind of kit, or if the motherboard may just be a little gone. The error messages I get back from Microsoft Update are WindowsUpdate_80091007 and WindowsUpdate_dt000, which are fairly generic. I tried installing with the SP1 file-based installer and got nowhere; the actual installer crashes with an "SP Installer has crashed" message. It's a fresh install on a fully scrubbed drive (an Intel 120 GB SSD).

    Read the article

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
    When a SharePoint Access Control List (ACL) size exceeds 64 KB for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine. The error most often seen is "The Parameter is Incorrect", which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here; please note that it is intended). Exceeding this limit is not unheard of: it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult. Fortunately, there is a query that you can run on a content database that can help identify the issue:

      SELECT [SiteId],
             MIN([ScopeUrl]) AS URL,
             SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
             COUNT(*) AS AclEntries
      FROM [Perms] (NOLOCK)
      GROUP BY SiteId
      ORDER BY AclSizeKB DESC

    This query results in a list of ACL sizes and entry counts on a site-by-site basis. You can also remove the grouping to see a more granular breakdown:

      SELECT [ScopeUrl] AS URL,
             SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
             COUNT(*) AS AclEntries
      FROM [Perms] (NOLOCK)
      GROUP BY ScopeUrl
      ORDER BY AclSizeKB DESC

    Read the article

  • Set Up Anti-Brick Protection to Safeguard and Supercharge Your Wii

    - by Jason Fitzpatrick
    We’ve shown you how to hack your Wii for homebrew software, emulators, and DVD playback; now it’s time to safeguard your Wii against bricking and fix some annoyances, like that stupid “Press A” health screen. The thing about console modding and jailbreaking (save for the rare company like Amazon that doesn’t seem to care) is that companies will play a game of cat and mouse to try to knock modded consoles out of commission, undo your awesome mods, or even brick your device. Although extreme moves like bricktacular updates are rare, once you modify your device you have to be vigilant in protecting it against updates that could hurt your sweet setup. Today we’re going to walk you through hardening your Wii and giving it the best brick protection available.

    Read the article

  • Programmer-friendly non-voxel art styles?

    - by Overv
    Like many other programmers, I've always wanted to make a game, but I simply lack the skills to do any production-quality graphics. I am, however, sure that I want to do the models and textures myself, because I need a lot of different objects and I am sure I wouldn't be able to find good matching models on 3D sites. That means I'll have to pick an art style that is "simple", programmer-friendly. An extreme example of this is of course Minecraft, but I don't want to go that basic; I'm absolutely against creating a voxel game. What kinds of art styles are out there that are relatively simple, i.e. things made out of basic shapes and textures, but are still good enough to form a believable and detailed world? An example of what I mean is Wind Waker: the objects are formed from relatively simple shapes, but still provide enough detail to create a nice, living world. The environment my game is set in is a city. What I'm really asking for here are good examples of "simple" art styles applied in practice, so I can choose one that fits my skills.

    Read the article

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time, the HDD light flashes constantly, and perfmon shows that the HDD is used 100% of the time (the CPU, on the other hand, isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read and written the most is the pagefile, but still at quite a low rate (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.

    Read the article

  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background and then ask my question. Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want data sent to the browser where it can be rendered with SVG (or canvas, or WebGL) so that it can be interactive. The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to reduce a data set, visually losslessly, for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
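    For readers who haven't met it, the basic (non-topology-preserving) Douglas-Peucker algorithm mentioned above looks roughly like this minimal Java sketch; Point is a hypothetical record, and the tolerance is in coordinate units, e.g. the size of one pixel at the target scale:

      import java.util.ArrayList;
      import java.util.List;

      public class Simplify {
          public record Point(double x, double y) {}

          public static List<Point> douglasPeucker(List<Point> pts, double tolerance) {
              if (pts.size() < 3) return new ArrayList<>(pts);
              // Find the point farthest from the segment joining the endpoints.
              int index = -1;
              double maxDist = 0;
              Point a = pts.get(0), b = pts.get(pts.size() - 1);
              for (int i = 1; i < pts.size() - 1; i++) {
                  double d = perpendicularDistance(pts.get(i), a, b);
                  if (d > maxDist) { maxDist = d; index = i; }
              }
              if (maxDist <= tolerance) {        // whole run fits the segment
                  List<Point> out = new ArrayList<>();
                  out.add(a); out.add(b);
                  return out;
              }
              // Keep the farthest point and recurse on both halves.
              List<Point> left = douglasPeucker(pts.subList(0, index + 1), tolerance);
              List<Point> right = douglasPeucker(pts.subList(index, pts.size()), tolerance);
              left.remove(left.size() - 1);      // avoid duplicating the split point
              left.addAll(right);
              return left;
          }

          static double perpendicularDistance(Point p, Point a, Point b) {
              double dx = b.x() - a.x(), dy = b.y() - a.y();
              double len = Math.hypot(dx, dy);
              if (len == 0) return Math.hypot(p.x() - a.x(), p.y() - a.y());
              return Math.abs(dy * p.x() - dx * p.y() + b.x() * a.y() - b.y() * a.x()) / len;
          }
      }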

    Read the article

  • Code maintenance: keeping a bad pattern when extending new code for consistency, or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusing the next maintainer and to stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Precision added after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that can be avoided with good design (though the resulting code might become harder to follow). In my current case it's a good old JDBC DAO module (with Spring templates on board), but I have already encountered this dilemma before and I'm seeking feedback from other devs. I don't want to refactor because I don't have time. And even with time, it would be hard to justify that a whole, perfectly working module needs refactoring; the cost of refactoring would outweigh its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme 'Keep It Simple, Stupid', I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that do the stupid boring work in your place? (The downside of the latter is that you'll have to learn some stuff, and maybe you'll have to maintain the easy stupid boring code too, until a full refactoring is done.)
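    To make the dilemma concrete, here is a hypothetical sketch (invented names, with Spring's JdbcTemplate standing in for the "helper") of the kind of DAO method that either stays as hand-rolled boilerplate in the existing style, or gets a helper that removes the repetition at the cost of one more thing for maintainers to learn:

      import java.util.List;
      import javax.sql.DataSource;
      import org.springframework.jdbc.core.JdbcTemplate;

      public class UserDao {
          private final JdbcTemplate jdbc;

          public UserDao(DataSource ds) { this.jdbc = new JdbcTemplate(ds); }

          // Helper-based version: JdbcTemplate owns the connection, statement,
          // result-set iteration, and exception translation. The hand-rolled
          // equivalent repeats roughly a dozen lines of try/finally plumbing
          // in every DAO method, which is the boilerplate under discussion.
          public List<String> findNamesByCity(String city) {
              return jdbc.query(
                  "SELECT name FROM users WHERE city = ?",
                  (rs, rowNum) -> rs.getString("name"),
                  city);
          }
      }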

    Read the article

  • Wall jacks to patch panel?

    - by rj454me
    OK, I'm by no means a seasoned networking pro, and I had no say in the design of our current server room, which is in dire need of an extreme makeover. Basically, in our server room we have 12 wall plates with 4 RJ-45 ports on each, 48 RJ-45 ports in total. From these 48 ports runs a spaghetti bowl of network cables feeding our servers in a rack; there is no patch panel currently, just straight from the wall jack to each server. What I was wondering is: is it feasible to mount a 48-port patch panel in our server rack and feed into this patch panel from the wall jacks (of course nicely routing the cables through some new cable trays)? We really don't have the funds to mount the patch panel and have it fed directly from the switches in the telecom closet, which is several hundred feet away. Current: switch (telecom closet) - wall jacks - servers. Proposed: switch (telecom closet) - wall jacks - patch panel - servers.

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow. It's nothing extreme, but dragging windows, switching workspaces, and things like that are just slow and look horrible; it feels like the FPS dropping in a game. Doing some Photoshop work in Windows was a relief by comparison! This effect gets worse if I connect my external monitor. My system is an Intel Pentium Dual-Core T4500 with 4 GB of memory and a GeForce 8200M G integrated graphics chip. Nothing fancy, but it should be able to run OK. My "experience" in Ubuntu is set to standard. (MSI CR500 laptop.) I've installed the NVIDIA drivers and tried both current and experimental; the experimental drivers seem to perform a bit better, but it's still bad overall. I set the mode to adaptive in the nvidia-settings tool, and it goes straight to the maximum setting and doesn't come back. Using htop I found that compiz and the X server always use a few percent of my CPU, more than I think they should; the time consumed is 5:18 for compiz, 4:33 for /usr/bin/X, and 2:41 for Google Chrome (about 30 tabs open, so not too strange, I think). What can I do to increase the visual performance? This makes me not want to use Ubuntu in public!

    Read the article

  • Oracle OpenWorld Kicks Off Today Delivering A Full Week of Insight, Education, and Unique Experiences

    - by Jeri Kelley
    San Francisco has been transformed into a sea of red, and more than 50,000 attendees from 140 countries will converge on the Bay Area for a week of education and insight into Oracle's strategy and roadmap for today's leading technology initiatives, including engineered systems, cloud computing, business analytics and big data, and customer experience. The week kicked off tonight with Oracle CEO Larry Ellison discussing how Oracle is taking a fundamentally different approach to delivering technology that is engineered to work together, giving customers extreme performance, simplicity, and cost savings. The jam-packed week continues with: more than 2,500 educational sessions; nearly 3,500 customer and partner speakers from marquee brands sharing how they are using Oracle technology to power their businesses; over 400 Oracle product demonstrations in the DEMOgrounds. Make sure to stop by to see the latest demonstrations of our Customer Experience solutions, including Oracle Commerce, Oracle RightNow, Oracle WebCenter, Oracle Fusion CRM, Oracle Social Relationship Management, and more. With more educational content than ever before, OpenWorld also expands to include six sub-conferences within the main OpenWorld umbrella, including the Customer Experience Summit @ OpenWorld, which runs October 3rd-5th. All of this education and insight comes with some fun as well: OpenWorld has become an exciting destination, with new experiences unveiled each year, including the debut of the first annual Oracle OpenWorld Music Festival, featuring some of today's hottest acts, emerging bands, and DJs playing over five nights at locations throughout San Francisco. Our Customer Experience team will be blogging and tweeting all week to keep you up to date, so be sure to subscribe to our Customer Experience blog and follow us on Twitter and Facebook: @OracleCX @OracleCommerce @OracleCRM Facebook.com/OracleCustomerExperience. If you are at OpenWorld, we hope you have a great week and get the knowledge you need to take your Oracle applications to the next level. And if you were not able to make it this year, be sure to tune in to the sessions that are broadcast live online.

    Read the article

  • Bay Area JUG Roundup 2010 - A Good Time for All

    - by Justin Kestelyn
    The first Bay Area JUG Roundup (#roundup10) convened at Oracle HQ on Wednesday evening, in the palatial surroundings of the Oracle Conference Center. (Yes, there will be more!) A couple hundred people were there, I'd say. More came out of this meetup than a bunch of new contacts and some mild indigestion (or even a mild hangover): - We (meaning Oracle) announced the opening of the eighth annual Duke's Choice Awards. As described on the Web page, "The awards celebrate extreme innovation in the world of Java technology and are granted to the best and most innovative projects using the Java platform." Entries will be accepted through July 1, with winners announced at JavaOne 2010. - Even more exciting, we offered a sneak preview of the Java Road Trip, a cross-country, 20-stop bus tour this summer involving one rock-star bus, one full-time blogger/videographer, a whole bunch of Java demos and speakers, and lots of beer and prizes. Stay tuned for more about this. - Sonya Barry, Java.net community manager, announced the beta.java.net project, which will be the end result of the java.net migration to a Kenai back end and a retooled social/community layer (already in progress). Sonya also announced that Maven support for Java.net projects is imminent, with just a contract to be signed in the next couple of weeks. Finally, we were all treated to a typically hilarious Java Posse appearance. Arun Gupta has posted photos as well as meetup slideware on his blog. And as soon as the video replay (thanks, Steve Chin) and Java Posse podcasts are available, I'll post links to those here too.

    Read the article

  • Should my program "be lenient" in what it accepts and "discard faulty input silently"?

    - by romkyns
    I was under the impression that by now everyone agreed this maxim was a mistake. But I recently saw this answer, which has a "be lenient" comment upvoted 137 times (as of today). In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago; they have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this. The second part of the maxim is "discard faulty input silently, without returning an error message unless this is required by the specification", and this feels borderline offensive. Any programmer who has banged their head against the wall when something fails silently will know what I mean. So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I misinterpreting what this is supposed to mean? Taken to the extreme, if Excel followed this maxim and I gave it an exe file to open, it would just show a blank spreadsheet without even mentioning that anything went wrong. Is this really a good principle to follow?
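    To make the two policies concrete, here is an illustrative Java sketch (my own toy "key=value" line format, not anything from the linked answer): the lenient reader silently drops malformed lines, while the strict one fails loudly with context.

      import java.util.ArrayList;
      import java.util.List;

      public class ParsePolicies {
          // "Be lenient, discard faulty input silently": bad lines vanish,
          // so a systematic upstream bug can go unnoticed for a long time.
          static List<String[]> lenientParse(List<String> lines) {
              List<String[]> pairs = new ArrayList<>();
              for (String line : lines) {
                  String[] parts = line.split("=", 2);
                  if (parts.length == 2) pairs.add(parts); // else: silently skipped
              }
              return pairs;
          }

          // Strict: the first malformed line surfaces immediately, with the
          // line number and content needed to track down the producer.
          static List<String[]> strictParse(List<String> lines) {
              List<String[]> pairs = new ArrayList<>();
              for (int i = 0; i < lines.size(); i++) {
                  String[] parts = lines.get(i).split("=", 2);
                  if (parts.length != 2) {
                      throw new IllegalArgumentException(
                          "Malformed input at line " + (i + 1) + ": " + lines.get(i));
                  }
                  pairs.add(parts);
              }
              return pairs;
          }
      }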

    Read the article

  • LCD problems: brightness / contrast blown out, can't bring up menu

    - by davr
    Yesterday, when I turned on my LCD monitor, the colors were very, very bright. It looked as if someone had gone into the monitor's on-screen display and turned brightness and contrast all the way up to the max. Colors were washed out, and it was almost painful to look at. So I tried to go into the menu to see if the settings were somehow wrong, and found that I cannot bring up the menu: all the buttons on the front of the monitor now do nothing, except the power button. I was able to go into my graphics card settings and turn the brightness way down to compensate, so it's at least usable for now, but it's not quite right; there is definitely some loss of color resolution or something from the two extreme settings canceling each other out. Any ideas what kind of failure in the LCD might cause this? According to the control panel, the model number is KL2490DW-D; it's a 24" 1920x1200 with the brand "Synaps".

    Read the article

  • What's the greatest, most impressive programming feat you ever witnessed? [closed]

    - by David Reis
    Everyone knows the old adage that the best programmers can be orders of magnitude better than the average. I've personally seen good code and programmers, but never something so absurd. So the question is: what is the most impressive feat of programming you ever witnessed or heard of? You can define impressive by: The scope of the task at hand, e.g. John single-handedly developed the framework for his company, a work comparable in scope to what the other 200 employees were doing combined. Speed, e.g. Stu programmed an entire real-time multitasking OS in a weekend, including its own C compiler and shell command-line tools. Complexity, e.g. Jane rearchitected our entire 10 million LOC app to work in a cluster of servers. And she did it in an afternoon. Quality, e.g. Charles's code had a defect rate per LOC 100 times lower than the company average; furthermore, his code was clean and understandable by all. Obviously, the more of these characteristics combined, and the more extreme each of them, the more impressive the feat. So, let me have it. What's the most absurd feat you can recount? Please provide as much detail as possible, and try to avoid urban legends or exaggerations; post only what you can actually vouch for. Bonus questions: Was the herculean task a one-off, or did the individual regularly amaze people? How do you explain such impressive performance? How was the programmer recognized for such awesome work?

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-01

    - by Bob Rhubart
    Complexity of Social Computing - Is it a Consideration for EA's? | Pat Shepherd blogs.oracle.com Pat Shepherd asks, "Does Enterprise Architecture need to consider Social Computing in its scope?" Who should own the Enterprise Architecture? | Michael Glas blogs.oracle.com "Instead of looking at just who owns the architecture," suggests Michael Glas, "think about what the person/role/organization should do." The Application Architecture Domain | Michael Glas blogs.oracle.com Michael Glas asks, and answers: "As an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?" CAP Twelve Years Later: How the "Rules" Have Changed | Eric Brewer www.infoq.com The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three. Oracle DB with OEM in Amazon Cloud | Dr. Frank Munz www.munzandmore.com Dr. Frank Munz shares a screencast that explains "how to create an Oracle DB instance in AWS, how to enable OEM...and how to connect to your cloud instance with a local installation of NetBeans." Sample External Login.jsp page for Oracle Access Manager 11g | Brian Eidelman fusionsecurity.blogspot.com A-Team blogger Brian Eidelman expands on a previous post dealing with configuring OAM 11g to use an externally hosted custom login page. Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 coherence.oracle.com Date: Thursday, June 7, 2012. Time: 5:30pm – 9:00pm PT. Where: Oracle Conference Center, Room 103, 350 Oracle Parkway, Redwood Shores, CA. Presentations: 6:00 p.m. - Coherence 101, The Evolution of Distributed Caching - Noah Arliss (Oracle); 7:00 p.m. - Optimizing Performance for Oracle Coherence and TopLink Grid at OOCL - Matt Rosen, Leo Limqueco (OOCL); 8:00 p.m. - Oracle Coherence Message Bus - Extreme Performance on Oracle Exalogic - Ballav Bihani (Oracle). Thought for the Day: "I can't be left unsupervised." — Ron Wood (born 06/01/1947). Source: Brainy Quote

    Read the article

  • ArchBeat Link-o-Rama Top 20 for March 18-24, 2012

    - by Bob Rhubart
    The top twenty most-clicked links shared via my social networks for the week of March 18-24, 2012. Oracle's ZFS Storage Appliance Simulator | Steen Schmidt Oracle Linux Online Forum - 4 sessions, 9 speakers + live chat March 27 OWSM vs. OEG - When to use which component - 11g | Prakash Yamuna Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH SOA! SOA! SOA!; OSB 11g Recipes and Author Interviews Webcast: Oracle Business Intelligence Mobile - March 27 - 10am PT / 1pm ET Oracle Hardware Systems: The Extreme Performance Tour - Dates and Locations Worldwide Oracle Cloud Conference: dates and locations worldwide Mismatch: Developer skills and customer demands | Floyd Teter OTN Virtual Developer Day - Java (APAC - in English) - March 27 Webcast Q&A: Demystifying External Authorization 2 New Cloud Computing resources added to free IT Strategies from Oracle library Encapsulating OIM API's in a Web Service for OIM Custom SOA Composites | Alex Lopez Webcast: Simplify Oracle RAC Deployment with Oracle VM SOA gets mobilized; mobile gets SOA-ized: survey | Joe McKendrick Integrating with Oracle Fusion Applications: Discovering Integration Artifacts | Rajesh Raheja Oracle Access Manager 11g - useful links | Dmitry Nefedkin Anil Gaur on Cloud Computing Support in Java EE 7 Enterprise app shops announcements are everywhere | Andy Mulholland The extraordinary software development manager | Seth Godin Thought for the Day: "Every large system that works started as a small system that worked." — Anonymous

    Read the article
