Search Results

Search found 20970 results on 839 pages for 'real mode'.


  • Oracle Applications Global User Experience

    - by ultan o'broin
    Today, we're launching Oracle's first-ever blog for global user experience (UX) issues in applications. We'll be talking about how we design and develop applications for global use, looking at cultural factors, internationalization (I18n), localization (L10n) and language use, for a start. We will also discuss how we study and work with real users so that our customers have applications that allow them to be productive regardless of where they are located in the world. In addition, we will inform you about any globally related events we know about, and about product features, development frameworks, tools and information relevant to our worldwide customers. Also, of course, we hope to hear from you, too. If you have anything you want to know about our global user experience, a localization you'd like, or a cultural feature you think would be useful, then let us know. If you have any tips or guidelines you'd like to share in this space, then this blog is for you too! As far as global user experience is concerned, you don't have to be lost in translation. Hence the name of the blog!

    Read the article

  • Configuration Manager Setting Causing Error PRJ0019

    - by Jeff Paterno
    Recently I ran into an issue with a project failing to build on an automated build server using CruiseControl. When I looked into the build log I saw that the Post-Build project was failing with the error message: "error PRJ0019: A tool returned an error code from 'Performing Post-Build Event...'". This was most frustrating, especially since the solution was building without issue on my local development environment. The Post-Build project was a C++ project that basically called several batch files to unregister/register assemblies, copy resources and supporting files, and place other dependencies in the GAC. I decided to run each of the batch files manually to see if that would provide more information as to why this project was failing. This led me to determine that the batch file placing assemblies in the GAC was the culprit, and that it was failing to find a particular assembly. The missing assembly was the output of another project. The project that was not producing the expected output was another C++ project that called a batch file. That batch process was actually embedding resource files into an assembly and then copying the assembly to the expected location. The real confusion started when I looked back into my Subversion log and noted that nothing had changed in this project in more than two months! It was almost as if the project had stopped building altogether. But what would cause that?! The Configuration Manager, obviously! Checking the solution's Configuration Manager settings, I found that the project that was not producing any output was in fact not selected to be part of the build process when the "Any CPU" platform was selected. This was the problem! I had recently updated the CruiseControl configurations to force the solution to be built targeting the "Any CPU" platform. As a result, the project at the root of the problem was not configured to be built, and the post-build process failed when it couldn't find what it needed.

    Read the article

  • I'm a student learning C++ and I've recently found out about Ruby. Would learning (some of) Ruby help me with C++ or would it just confuse me?

    - by Von32
    Hi! As the title says, I'm a student who will be starting my second year of C++ very soon. I've discovered Ruby, however. While I've heard much buzz about the language before, I've disregarded it because I always thought it wasn't something that would be useful. However, I've found a number of FANTASTIC tutorials on Ruby and am interested in learning it (probably because it seems so straightforward). Would playing around with Ruby be a good or bad idea? I understand that there's no such thing as bad knowledge, but I'm afraid that Ruby will only confuse me when dealing with C++. How different from C++ is it? I've read it's based on C in some way, but my google-fu seems to be horrible today. How useful is Ruby in the real world? I'm not specifically asking about jobs; I'm more interested in what sort of applications may come from this language. Any specific examples worth looking at? Going back to question two: I've read some posts on here saying that Ruby and C++ can hold hands once in a while. How flexible is this relationship? Is it rare that this would work? Thank you very much for your time! EDIT: This has to be the one community on the internet that doesn't suck. Why have I never posted before? You guys are awesome!

    Read the article

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using Unit Test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world".

    Why: To write a decent program even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow & variables & IO ...), and you are tempted to gloss over details just to get your program to work. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking through, clearness and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from the second chapter of your "Learning X" book into your chapter-one program.

    Why not: 'Real' Unit Tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' Unit Tests should output relevant facts about the new language, like:

      perl6 10-string.t
      # ### p5chop
      ...
      ok 13 - p5chop( "cbä" ) returns "ä"
      ok 14 - after that, victim is changed to "cb"
      # ### (p6) chop
      ...
      ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
      ok 18 - after that, victim is unchanged: "cbä"
      # ### chomp
      ...

    So (mis?)using Unit Tests may be counterproductive - practicing actions while learning that you wouldn't use professionally.

    How: Writing 'didactic' Unit Tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort, or may not be suitable for a person just starting with a new language.

    So: Is using Unit Tests as a learning tool a bad idea? Can the Unit Test tool(s) of your favourite language(s) be used didactically? Should implementation details (eventually) be discussed here or over at stackoverflow.com?
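    For illustration, here is what such a 'didactic' test might look like in Python's unittest module. This is a hypothetical sketch in the same spirit as the Perl 6 output above, not part of the original question; each assertion records a fact about the language being learned:

      import unittest

      # 'Didactic' tests: each assertion documents a fact about the language,
      # and the descriptive test names double as success messages.
      class StringFacts(unittest.TestCase):

          def test_slicing_returns_a_chopped_copy(self):
              victim = "cba"
              chopped = victim[:-1]           # Python's closest analogue to chop
              self.assertEqual(chopped, "cb", "slicing returns a chopped copy")
              self.assertEqual(victim, "cba", "the original string is unchanged")

          def test_strip_removes_whitespace_from_both_ends(self):
              self.assertEqual("  abc \n".strip(), "abc")

      if __name__ == "__main__":
          unittest.main(verbosity=2)   # verbose mode prints one line per fact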

    Read the article

  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent like 20 hours on this nice error, and it seems dozens of people over the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot-up it says that "/dev/md0 is not ready yet or not present" and asks to press 'S'. Very nice for Ubuntu Server - I have to bring a monitor and keyboard over to go on. After this the system boots and it's all fine. The md0 device works, /proc/mdstat is fine. When I do mount -a, it mounts this array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints how to make it work properly?

    fstab:

      UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (it is autogenerated):

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # instruct the monitoring daemon where to send mail alerts
      MAILADDR CENSORED

      # definitions of existing MD arrays
      ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

      # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
      # by mkconf 3.1.2-2

    Read the article

  • Design of input files reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when not specified, or adding full path information to relative paths specified in the input. There are two different strategies to achieve this. The first strategy is to perform these transformations at input-file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's a default or user-specified information. The second strategy is to make the file reader a literal one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled in (which may however be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, fall back to a default if needed, or modify the input properly before using it. I would like to know your opinion on these two approaches, and in particular which one you have found the most frequently implemented.
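    A minimal sketch of the two strategies in Python (the keys, defaults and function names are hypothetical, purely to make the contrast concrete):

      import os

      DEFAULT_TIMEOUT = 30  # hypothetical default value

      # Strategy 1: all intelligence lives in the parser; consumers always
      # receive a complete, transformed object.
      def read_input_eager(raw: dict) -> dict:
          return {
              "timeout": raw.get("timeout", DEFAULT_TIMEOUT),  # default applied here
              "path": os.path.abspath(raw.get("path", ".")),   # relative -> full path here
          }

      # Strategy 2: a verbatim one-to-one mapping; nothing is filled in...
      def read_input_verbatim(raw: dict) -> dict:
          return dict(raw)

      # ...so the running code must recover missing values at the point of use.
      def run(config: dict) -> None:
          timeout = config.get("timeout", DEFAULT_TIMEOUT)
          path = os.path.abspath(config.get("path", "."))
          print(f"running with timeout={timeout}, path={path}")

    Note how strategy 1 discards the knowledge of which values the user actually supplied, which is exactly the trade-off described above.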

    Read the article

  • Using gluLookAt to move camera in 2D iPhone game?

    - by Mr.Gando
    Hey guys, I'm trying to use gluLookAt to move the camera in my iPhone game, but every time I've tried to use gluLookAt my screen just goes "blank" (grey in this case). I'm trying to render a simple triangle and to move the camera. This is my code. To set up my scene I do:

      glViewport(0, 0, backingWidth, backingHeight);
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glRotatef(-90.0, 0.0, 0.0, 1.0); // using iPhone in horizontal mode
      glOrthof(-240, 240, -160, 160, -1, 1);
      glMatrixMode(GL_MODELVIEW);

    Then my triangle-rendering code looks like:

      GLfloat triangle[] = {0, 100, 100, 0, -100, 0,};
      glClearColor(0.7, 0.7, 0.7, 1.0);
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glEnableClientState(GL_VERTEX_ARRAY);
      glColor4f(1.0, 0.0, 0.0, 1.0);
      glVertexPointer(2, GL_FLOAT, 0, &triangle);
      glDrawArrays(GL_TRIANGLES, 0, 6);
      glDisableClientState(GL_VERTEX_ARRAY);

    This draws a red triangle in the middle of the screen. When I try to apply gluLookAt (I got the implementation of the function from Cocos2D, so I assume it's correct), I do:

      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();
      gluLookAt(0,0,1,0,0,0,0,0,1); // try to move the camera a bit?
      // ... followed by the same triangle-rendering code as above ...

    This leads me to a grey screen (glClearColor is grey). I've tried all sorts of things and read what I've found about gluLookAt on the net, but no luck :(. If someone could explain to me or show me how to move the camera in a top-down fashion (Zelda, etc.), I would really appreciate it. Thanks!
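    For a top-down 2D camera, one common idiom skips gluLookAt entirely and simply translates the modelview matrix by the negated camera position. Below is a minimal PyOpenGL sketch of that idea (not iPhone code; the variable names are assumptions). As a side note, in the call gluLookAt(0,0,1, 0,0,0, 0,0,1) the up vector (0,0,1) is parallel to the viewing direction, which yields a degenerate view matrix and is a classic cause of a blank screen:

      from OpenGL.GL import GL_MODELVIEW, glLoadIdentity, glMatrixMode, glTranslatef

      cam_x, cam_y = 0.0, 0.0   # hypothetical camera position in world units

      def apply_2d_camera():
          glMatrixMode(GL_MODELVIEW)
          glLoadIdentity()
          # Shift the whole world opposite to the camera; increasing cam_x/cam_y
          # pans the view across the scene, Zelda-style.
          glTranslatef(-cam_x, -cam_y, 0.0)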

    Read the article

  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board they move smoothly, but when moving the whole board, the x and y coordinates sent to pan are "jumping", by an increasing amount the longer the drag lasts. Here is a sample of the deltaY values sent to pan while dragging smoothly downwards:

      1.1156368
      -0.13125038
      -1.0500145
      0.98439217
      -1.0500202
      0.91877174
      -0.984396
      0.9187679
      -0.98439026
      0.9187641
      -0.13125038

    This is how I move the camera:

      public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
          cam.translate(-deltaX, -deltaY);
      }

    I have tried using both the delta values sent to pan and the real position values, with similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera. What could the cause be, and what is the solution? When I move the camera by only half the delta values, it moves smoothly but only at half the speed of the mouse pointer:

      cam.translate(-deltaX / 2, -deltaY / 2);

    It seems like moving the camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movements? (This question was also posted on Stack Overflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas there are too many answers (here and elsewhere) to other questions stopping me from finding them. I just encountered something substantially similar to what is described in the closed SO question "sudo : 'segmentation fault' Ubuntu maverick [closed]". My team is using Ubuntu 11.04 on VMware Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "segmentation fault". I just ran our application in simulation mode. I was running Qt Creator under sudo and Qt Creator received the abort signal (if I recall). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message "Segmentation fault". I clicked on the power widget to see if I could log out, but the system shut down straightaway without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo (something that has always made me nervous) will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often enough that I am trying to lobby for VM changes to be able to run the process without sudo.

    Read the article

  • What is the best way to code the XNA Game Server for FPS game?

    - by AgentFire
    I'm writing an FPS XNA game. It's going to be multiplayer, so I came up with the following: I'm making two different assemblies, one for the game logic and a second for drawing it and the game-irrelevant stuff (like rocket trails). The connection type is client-server (not peer-to-peer), so every client first connects to the server and then the game begins. I have decided to use the XNA.Framework.Game class for the clients to run their game in a window (or fullscreen), and the GameComponent/DrawableGameComponent classes to store the game objects and update and draw them on each frame. Next, I want the answer to this question: what should I do on the server side? I have a few options: Create my own Game class on the server, which will process all the game logic (only, no graphics). The reason why I am not using the standard Game class is that when I call Game.Run() a white window appears and I can't figure out how to get rid of it. Or somehow use the original XNA Game class, which already has the GameComponent collection and Update event (60 times per second, just what I need). UPDATE: I have more questions. First, what socket mode should I use, TCP or UDP? And how do I actually let the client know that this packet is meant to be processed after that one? Second, if I am going to use exactly the GameComponent class for the game objects stored and processed on the server, how do I make them be drawn on the client? Inherit them (while they are combined into an assembly)? Something else?
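    On the packet-ordering question: a common technique with UDP (shown here as a Python sketch, not XNA networking code; the function names are made up) is to stamp every datagram with a monotonically increasing sequence number, so the receiver can drop anything that arrives late or duplicated:

      import socket
      import struct

      SEQ = struct.Struct("!I")   # 4-byte big-endian sequence-number header

      def send_state(sock: socket.socket, addr, seq: int, payload: bytes) -> None:
          # Prefix every datagram with its sequence number.
          sock.sendto(SEQ.pack(seq) + payload, addr)

      def recv_state(sock: socket.socket, last_seq: int):
          data, _ = sock.recvfrom(2048)
          seq, = SEQ.unpack_from(data)
          if seq <= last_seq:
              return last_seq, None          # late or duplicate datagram: drop it
          return seq, data[SEQ.size:]        # newer state: hand the payload on

    TCP, by contrast, guarantees ordering by itself at the cost of head-of-line blocking, which is why fast-paced games usually favour sequence-numbered UDP for state updates.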

    Read the article

  • Can't boot into windows7/ubuntu 12.04 after running boot-repair

    - by Rini
    I have installed Ubuntu 12.04 on my preinstalled Windows 7 Sony Vaio E series laptop, following the instructions here: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/ Everything went well and I was able to boot into Windows after the complete installation of Ubuntu. Then, following instructions on the web, I tried to add Ubuntu to my BIOS boot entries using EasyBCD (but forgot to add the Windows 7 entry). As a result, I lost the Windows 7 OS and couldn't boot into either OS; then I successfully repaired Windows 7 using the recovery CD. Now my problem is that I can't reinstall Ubuntu 12.04 using the Live CD: it halts every time before the disk-partitioning step with the error "ubi-partman crashed", "ubi-partman failed with exit code 141. Further information may be found in /var/log/syslog. Do you want to try running this step again before continuing? If you do not, your installation may fail entirely or may be broken." Any choice to continue results in the same error. After that, following some posted solutions, I ran boot-repair commands in a terminal (in Try Ubuntu mode) and got the following URL: http://paste.ubuntu.com/1206434/ Now, after restart, I can't boot into either Windows or Ubuntu. Even attempts to run Windows repair fail, and I get the message 'No operating system found'. I don't know what went wrong after running the boot-repair command. Please help in solving this issue. Thanks and Regards, R Shukla

    Read the article

  • How to Fix this specific Google "Fetch as Googlebot" error appearing on my Webmaster Tools?

    - by UXdesigner
    Good day, I'm currently trying to find out why my website has lost all of its rank in Google. I don't even appear in Google results for my own domain, yet other sites that link to me do appear in the results. I think it's all because I left my site alone for two months and came back to find 20k comment-spam entries, which I completely deleted and fixed with filters and by adding a new Disqus comment service. The thing is, I added my site to Google Webmaster Tools and I'm finding several awful things. For example, when I click on Google's Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

      Date: Wednesday, July 20, 2011 9:43:35 AM PDT
      Googlebot Type: Web
      Download Time (in milliseconds): 55

      HTTP/1.1 403 Forbidden
      Date: Wed, 20 Jul 2011 16:43:36 GMT
      Server: Apache
      Vary: Accept-Encoding
      Content-Encoding: gzip
      Content-Length: 248
      Keep-Alive: timeout=2, max=100
      Connection: Keep-Alive
      Content-Type: text/html; charset=iso-8859-1

      403 Forbidden
      Forbidden
      You don't have permission to access / on this server.
      Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again. I used to have a really nice Google result for the past three years. Now there's nothing. thanks,
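    Since the server answers 403 to the Googlebot fetch, one plausible culprit (an assumption, not a confirmed diagnosis) is an .htaccess or security rule left over from the spam cleanup that blocks requests by User-Agent. A quick Python check comparing the status code returned to a browser-style UA against the Googlebot UA:

      import urllib.error
      import urllib.request

      SITE = "http://example.com/"   # placeholder: put your domain here

      USER_AGENTS = {
          "browser": "Mozilla/5.0",
          "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
      }

      for name, ua in USER_AGENTS.items():
          req = urllib.request.Request(SITE, headers={"User-Agent": ua})
          try:
              code = urllib.request.urlopen(req).getcode()
          except urllib.error.HTTPError as err:
              code = err.code                # 4xx/5xx raise; grab the status code
          print(f"{name}: HTTP {code}")

    If the Googlebot UA gets a 403 while the browser UA gets a 200, the block lives in the site's own configuration rather than at Google's end.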

    Read the article

  • How to number nested ordered lists.

    - by Wes
    Is there any way through CSS to style nested ordered lists to display sub-numbers? The idea is similar to using heading levels; however, what I'd really like to see is the following. Note each of these subsections has text, not just a title. (This isn't a real example, just some organisational stuff.) Now I know I can use <h1>-<h6>, but nested lists would be much clearer and would allow for different indentation styling. Also, it would be semantically correct. Note I don't think that <h1>-<h6> are correct in many ways, as the name doesn't apply to the whole section.

      1 Introduction
        1.1 Scope
            Blah Blah Blah Blah Blah Blah
        1.2 Purpose
            Blah Blah Blah Blah Blah Blah
      2 Cars
        Blah Blah Blah Blah Blah Blah
        2.1 Engines
            sub Blah Blah Blah
            sub Blah Blah Blah
        2.2 Wheels
        ...
        2.10.21 Hub caps
            sub-sub Blah Blah Blah
            sub-sub Blah Blah Blah
          2.10.21.1 Hub cap paint
            sub-sub-sub Blah Blah Blah
            sub-sub-sub Blah Blah Blah
      3 Planes
        3.1 Commercial Airlines
        ...
      212 Glossary

    Read the article

  • Dot Matrix printers setup...

    - by Parhs
    Hello! I am using Debian, which is similar to Ubuntu. We have 7 dot matrix printers, some very old, like this one: http://www.omnidatasys.net/product/desc_printer_ti880.htm which has worked daily since 1979 and at text is faster than many inkjets. I believe it has its own language... Sending text to the serial port (port server) prints garbage. However, I think it prints only capital English up to ASCII 95, and Greek capitals for the rest up to 127 (a special chip). Sending English capital letters prints garbage, I think, but I am not sure... I will try again... The other printers are ESC/P compatible and I use the generic Epson driver provided by Ghostscript. However, I think that sending text via

      lp -dpr1 filename

    prints the text as a graphic... Changing the printer font face (Courier, Times Roman, etc.) or pitch has no effect... I am wondering if there is any workaround for this. In AIX they claim that the lp command printed output as text, and COBOL programs send raw text to lp printers. However, in AIX they use some custom filters for the printers and have more options for dot matrix printers. I would like to know if there is a solution for this: to avoid graphics mode for text and to change the font face somehow. The most straightforward approach would be to use no driver, just send ESC/P from COBOL, but this requires too much work... Thank you again!
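    One way to test the "no driver" route without touching the COBOL side is to hand CUPS a raw job, so it applies no filtering at all and the printer uses its own built-in fonts. A hedged sketch in Python (the queue name "pr1" is a placeholder, and the exact ESC/P codes vary by model, so check the printer manual):

      import subprocess
      import tempfile

      ESC = b"\x1b"
      job = (
          ESC + b"@"                                        # initialize printer
          + ESC + b"E" + b"BOLD HEADING\r\n" + ESC + b"F"   # ESC E / ESC F: bold on/off
          + b"plain draft-quality text line\r\n"
          + b"\x0c"                                         # form feed to eject the page
      )

      with tempfile.NamedTemporaryFile(suffix=".prn", delete=False) as f:
          f.write(job)
          path = f.name

      # "-o raw" tells CUPS to pass the bytes through unfiltered.
      subprocess.run(["lp", "-d", "pr1", "-o", "raw", path], check=True)

    If that prints as crisp device text instead of slow graphics, the same raw queue could serve as the target for the existing programs.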

    Read the article

  • Microsoft MVP Again for 2011

    - by Vincent Maverick Durano
    I just got great news from Microsoft: I have been re-awarded as a Microsoft MVP (Most Valuable Professional) for this year. This is my 3rd year in a row as an MVP and I'm of course very happy about it and feel honored by it. Woohoo!! Here's the proof =} Dear Vincent Maverick Durano, Congratulations! We are pleased to present you with the 2011 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in ASP.NET/IIS technical communities during the past year. The Microsoft MVP Award provides us the unique opportunity to celebrate and honor your significant contributions and say "Thank you for your technical leadership." BIG thanks to Microsoft, my MVP Lead Lilian Quek, readers, and everyone who has supported me!!!

    Read the article

  • ArchBeat Link-o-Rama for October 29, 2013

    - by OTN ArchBeat
    Exceptions Handling and Notifications in ODI | Christophe Dupupet: Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.

    Tech Article: SOA in Real Life: Mobile Solutions: The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go.

    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira: Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."

    WebLogic & FMW Provisioning update | Edwin Biemond: "Provisioning was a hot topic at Oracle OpenWorld 2013," says Oracle ACE Edwin Biemond. His latest blog post discusses what is now possible with WebLogic and Fusion Middleware, and looks at what might be possible in the future.

    Reusing and Extending ADF BC Entities from Common Model | Andrejus Baranovskis: Oracle ACE Director Andrejus Baranovskis' post is about "ADF architecture and better application structuring with EO reuse from a common model." Andrejus describes "how to implement additional requirements to common model in extended ADF BC Entities."

    Thought for the Day: "I work hard, I work late, I have nothing on my conscience. When I go to bed, I sleep." — Ellen Johnson Sirleaf, 24th and current President of Liberia (born 29 October 1938). Source: brainyquote.com

    Read the article

  • Oracle Partner Days and Oracle Days are coming to a city in EMEA near you!

    - by Javier Puerta
    Oracle Partner Days: A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver real business return on your OPN membership. You will hear about the business opportunities coming from the adoption of the entire Oracle stack and the latest product value propositions and related sales strategy, and you will be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting key contacts in sales and consultancy, delivering Oracle strategy, engagement around the several perspectives of the Oracle portfolio, executive keynotes, and deep-dive, business-content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you. Oracle Days: Oracle Days will be hosted after Oracle OpenWorld across EMEA, through October and November. By attending an Oracle Day, customers and partners can: learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation and by following cross-stack solution tracks within the agenda; discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges; meet Oracle experts and watch live demos of new products; and get the latest news from Oracle OpenWorld. See the full calendar and cities here.

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems to be more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete:

      /api/location/Londo

    and the query would be something like:

      SELECT * FROM Locations WHERE Name LIKE 'Londo%'

    These locations change very infrequently, so I would like to cache them to prevent trips to the database for no real reason and to improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Should I just cache the entire table/collection in a single key with a specific data structure which could assist with the searching, and then do the search upon retrieval of the data?
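    A sketch of that last option, caching the whole collection and searching it in memory (Python rather than C#, purely to illustrate the shape; the class and names are made up):

      import time

      class LocationCache:
          """Holds every location name in memory and refreshes once per TTL."""

          def __init__(self, load_all, ttl_seconds=3600):
              self._load_all = load_all      # callable that queries the database
              self._ttl = ttl_seconds
              self._names = []
              self._loaded_at = 0.0          # forces a load on first use

          def search(self, prefix, limit=10):
              if time.monotonic() - self._loaded_at > self._ttl:
                  self._names = sorted(self._load_all())   # one DB trip per TTL
                  self._loaded_at = time.monotonic()
              p = prefix.lower()
              return [n for n in self._names if n.lower().startswith(p)][:limit]

      # usage: LocationCache(lambda: ["London", "Londonderry"]).search("Londo")

    Because the names are kept sorted, a large table could also be prefix-searched with bisect; the linear scan keeps the sketch short.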

    Read the article

  • How Can I Know Whether I Am a Good Programmer?

    - by Kristopher Johnson
    Like most people, I think of myself as being a bit above average in my field. I get paid well, I've gotten promotions, and I've never had a real problem getting good references or getting a job. But I've been around enough to notice that many of the worst programmers I've worked with thought they were some of the best. Bad programmers who are surrounded by other bad programmers seem to be the most self-deluded. I'm certainly not perfect. I do make mistakes. I do miss deadlines. But I think I make about the same number of bonehead moves that "other good programmers" do. The problem is that I define "other good programmers" to mean "people who are like me." So, I wonder, is there any way a programmer can make some sort of reasonable self-evaluation? How do we know whether we are good or bad at our jobs? Or, if terms like good and bad are too ill-defined, how can programmers honestly identify their own strengths and weaknesses, so that they can take advantage of the former and work to improve the latter?

    Read the article

  • SQL SERVER – Tell me What You Want to Listen – My 2 TechED 2011 Sessions

    - by pinaldave
    I am going to present two sessions at TechEd India on March 25, 2011, and I would like to know what you want me to cover in them. Watch the video taken by my wife while I was preparing for the sessions.

    Sessions date: March 25, 2011

      Understanding SQL Server Behavioral Pattern – SQL Server Extended Events
      Date and time: March 25, 2011, 12:00 PM to 01:00 PM

      SQL Server Waits and Queues – Your Gateway to Perf. Troubleshooting
      Date and time: March 25, 2011, 04:15 PM to 05:15 PM

    I promise the following for both of my sessions: I will share the scripts demonstrated in the session right at the end of the session; the sessions will be 300-400 level, but I promise to make the concepts very simple; fewer slides and lots of meaningful demos; sessions close to real-life cases and scenarios; surprise gifts for the best participants; I promise to answer all questions, either in the session or in the hall right after it; and lots of technical education and FUN! Please leave your comments with your expectations, and if you are going to attend the sessions do let me know here. We will surely meet at the event and have some interesting talks. You can read the abstracts of the sessions here. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Akka react vs receive

    - by Will I Am
    I am reading my way through the Akka tutorials, but I'd like to get my feet wet with a real-life scenario. I'd like to write both a connectionless UDP server (an echo/ping-pong service) and a TCP server (also an echo service, but one that keeps the connection open after it replies). My first question is: is this a good experimental use case for Akka, or am I better served by more common paradigms like IOCP? Would you do something like this with Akka in production? Although I understand conceptually the difference between react() and receive(), I struggle to choose one or the other for the two models. In the UDP model, there is no concept of who the sender is on the server once the pong is sent, so should I use receive()? In the TCP model, the connection is maintained on the server after the pong, so should I use react()? If someone could give me some guidance, and maybe an opinion on how you'd design these two use cases, it would go a long way. I have found a number of examples, but they didn't have explanations as to why they chose the paradigms they did.

    Read the article

  • I don't know C. And why should I learn it?

    - by Stephen
    My first programming language was PHP (gasp). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low- or mid-level languages like C. The general consensus in the programming community at large is that "a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc." I do not agree. I argue that: Because high-level languages are easily accessible, more "non-programmers" dive in and make a mess. In order to really get anything done in a high-level language, one needs to understand the same similar concepts that most proponents of "learn low-level first" evangelize about. Some people need to know C; those people have jobs that require them to write low- to mid-level code. I'm sure C is awesome, and I'm sure there are a few bad programmers who know C. Why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving forward? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental deterministic trait that separates good programmers from bad ones. Does anyone have any real-world examples of how something written in a high-level language (say Java, Pascal, PHP, or JavaScript) truly benefited from a prior knowledge of C? Examples would be most appreciated.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "Big Data" challenges to be faced, and potential lessons to be learned from these high-profile government programmes, that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity; the early or retrospective violation of regulations, laws, or corporate policies and procedures; emerging risks; and weakening controls? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

      Traditional Enterprise Data - includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
      Machine-Generated/Sensor Data - includes Call Detail Records ("CDR"), weblogs and trading systems data.
      Social Data - includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

      Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise, including live data from packaged and custom applications (for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more).
      Velocity. Social media data streams, while not as massive as machine-generated data, produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
      Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products, or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
      Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.

    For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions: communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data: "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability, and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category. However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data, and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with, and the need to quickly access, classify, protect and manage that information, can quickly become a key issue from a legal as well as a technical or operational standpoint. However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and Fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

      Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
      Conduct a big data readiness and Information Architecture maturity assessment
      Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
      Provide initial guidance on big data candidate selection for migrations or implementation
      Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
      Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
      Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions. Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it.

    Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • Configuration tools for multiple monitors for X / Linux

    - by richard
    I have Ubuntu 10.04 running GNOME and two monitors, and I am wondering if I can get a better multi-monitor configuration tool. The one I have, gnome-display-properties, has too many problems, including: when I swapped my monitors over, the narrower (external) one was now on the left; there is a width calculation error, such that I have a virtual monitor the width of the wide monitor on the narrow monitor and part of the wide monitor, and a virtual narrow monitor on the remainder of the wide monitor; and the visible mouse pointer is not aligned with the active spot, with an x offset of one monitor width. I would like, in approximate order of importance: no bugs; to be able to select which is the primary monitor; to have multiple configurations; configurations to be automatically selected based on which monitors are attached; configurations to be cycled (reliably) when the display-mode key is pressed; when a display is deactivated, for windows to migrate to the remaining monitors; and an option to not change display resolution when mirroring, but to use side/top blanking bars to pad out the screen.

    Read the article

  • Internship in License Contract Management

    - by cristian.condurache(at)oracle.com
    Hi Everyone, My name is Luca. I am an intern on the License Contract Management team in Italy. I studied Economics and Business in Pescara and finished my Master's Degree in July 2009. After a short work experience near my home town I decided to look for a job in an international company. I got in touch with Oracle in January 2010. I had a telephone interview and then a face-to-face interview. On a cold and grey morning, I arrived in Milan... my first impression was fantastic... a big modern building with wide TVs everywhere. I was a little nervous but very excited. I understood this could be a great opportunity... The interview went well and I started to work in March. After a training period I was quickly involved in the closing of the last quarter of the fiscal year, of which May is the last month at Oracle. Working as a License Contract Manager is a real challenge for a fresh graduate. It involves thoroughly understanding the Oracle Policies and Practices with regard to License Contracts. In my experience, especially in May, I learnt to work under high pressure, within time constraints, and to keep up with constant changes. In this period I also had the opportunity to be involved in different negotiations, being directly in contact with the customers. This helped me to develop my relational skills during complex transactions. Looking back at the nine months at Oracle I can say I have a better understanding of the IT world. It is a complex environment that changes continuously, offering new challenges to learn from every time. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Read the article
