Daily Archives

Articles indexed Wednesday, March 16, 2011

Page 3 of 14

  • Passing Parameters Between Web-Services and JSF Pages

    - by shay.shmeltzer
    This is another quick demo that shows a common scenario, combining several demos I did in the past. The scenario: we have two web services, one that returns a list of objects and another that lets us update an object. We want to build a page flow where the first page shows the list of objects and allows us to select one, and then we can edit that instance on the next page and call the second web service to update our data source. The demo shows: how to select a row and save the object value in pageFlowScope (using setPropertyListener); how to create a page that allows us to modify the value of the pageFlowScope object; and how to pass the object as a parameter to the second web service. Check it out here:
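
    The demo itself is a video, but the pageFlowScope hand-off it describes can be roughed out in a managed bean. Below is a minimal sketch, assuming an ADF Faces page flow where a setPropertyListener has already copied the selected row's object into pageFlowScope; the bean name, the "selectedEmp" key, and the update proxy are made up for illustration and are not from the demo.

        import java.util.Map;
        import oracle.adf.share.ADFContext;

        // Hypothetical backing bean for the second page of the flow.
        public class EditEmployeeBean {

            // Page 1 (via af:setPropertyListener) is assumed to have stored the
            // selected row's object in pageFlowScope under the key "selectedEmp".
            public Object getSelectedEmployee() {
                Map<String, Object> pageFlowScope =
                        ADFContext.getCurrent().getPageFlowScope();
                return pageFlowScope.get("selectedEmp");
            }

            // Bound to the second page's button: hand the edited object to the
            // update web service (shown here only as a placeholder comment).
            public void update() {
                Object selected = getSelectedEmployee();
                // updateServiceProxy.update(selected);  // second web service call
            }
        }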

    Read the article

  • Oracle Exalogic Elastic Cloud - Planned Webcasts

    - by chuck.speaks
    I’m putting together a collection of recorded webcasts around Oracle Exalogic Elastic Cloud (Exalogic). The plan is to do a systems overview and then multiple deep dives into the hardware and software components that make up the engineered system. Those of you who are members of our partner community (Oracle Partner Network), drop me a note if you are interested in a full-blown in-class delivery via PTS resources. There is no schedule for these workshops yet, but if there is enough interest, I would venture to guess they would roll out soon. Those of you with applications certified on Oracle WebLogic Server who would like to scale to Exalogic, see me or watch this space. Chuck Speaks, chuck <dot> speaks at oracle <dot> com

    Read the article

  • Have You Checked Our BI Publisher Channel on YouTube?

    - by kanichiro.nishida
    These days, more and more people are watching video online rather than reading. Steve Jobs once said people don’t read anymore. Well, I love books and still read a lot, whether in books, magazines, on an iPad, a MacBook Pro, or whatever medium shows me letters! But I have to admit, sometimes it’s much easier to understand something, especially a How-To, by watching a video clip than by reading about it. And this is why we started our BI Publisher channel on YouTube last summer. Since then we have uploaded over 10 video clips, and now we’re gearing up to add more and more. Now, we’re in the middle of finishing up our work for the next 11G 1st patchset release, which should be coming soon and will have a lot of great new features that I can’t wait to talk to you about. And of course we’re preparing introduction and How-To clips. So please subscribe to the BI Publisher channel now if you haven’t done so yet, and stay tuned for the new clips! http://www.youtube.com/user/bipublisher Also, we’d love to hear your comments on each clip, so please don’t forget to leave your comments after you watch!

    Read the article

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE also used by JDeveloper, and it comes with a bunch of standard behavior for editing and rendering code. One of the lesser-known things is that if you drop a text file into OWB you can edit it. So you can drop your tcl scripts right into OWB and edit them in place, and you don’t need another IDE like Eclipse just for this task. Cool, so you have the file there. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn’t normally manipulate files, so things like Ctrl-S save the OWB objects, but if you edit a file, closing the file will ask whether you want to save it – check it out. Now we enter the realm of ‘he who dares’… Note that the IDE doesn’t know about tcl files out of the box, so you see above there is no syntax highlighting. The code is identified by the extension: .java is Java, .html is HTML, and so on. With OWB, the OMB scripts are tcl, and we usually give these files a .tcl extension. One trick to get syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get syntax highlighting – see the illustration here, where side by side we see the file with a .java extension and with a .tcl extension. Pretending to be .java is not ideal, but it gets us something more useful than Notepad. We can then change the syntax highlighting so that we get Eclipse-like highlighting within the IDE, from the Tools > Preferences option; you then get the Eclipse-like rendering, albeit using a little tweak on the file names… This might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • Oracle WebCenter: Social Networking & Collaboration

    - by kellsey.ruppel(at)oracle.com
    We’ve talked in previous weeks about how the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We’ve provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we’ll focus on how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration.

    We recently talked with Carin Chan, Principal Product Manager at Oracle, about Social Networking and Collaboration. In today’s work environment, employees have come to expect social and collaborative services to augment their work environment. Whether it is to post a blog or to poll fellow coworkers, employees expect and demand access to highly integrated, collaborative work environments that allow them to quickly contribute at work, whether that is to make informed decisions, contribute on projects, or share knowledge.

    Social and collaborative services from Oracle WebCenter add an immeasurable amount of value to achieving a modern user experience. Oracle WebCenter Services provides rich and comprehensive social computing services, such as wikis, blogs, instant messaging, presence, activity streams and graphs, and polls/surveys, that give employees access to rich collaborative services to work efficiently. Employees can create pages or spaces that mix and match collaborative services while bringing in data from other applications to share with groups, teams, or organizations. These out-of-the-box social and collaborative services include:

    - People Connections and Activity Streams enable users to quickly assemble and visualize their social business networks and track user activities.
    - Activity Graphs tracks all user activities in real time and gathers intelligence about these users, their connections, and the way they use information to make educated recommendations and provide on-the-spot information discovery.
    - Wikis and blogs enable the community authoring of documents and sharing of ideas, and also allow for the gathering of feedback and comments on those ideas.
    - Tags and links allow users to easily mark, connect, and share information with others.
    - RSS feeds are available to track new or changed information related to discussion forums, processes, or activities in an Oracle WebCenter environment.
    - Discussion forums enable sharing of group knowledge and easy creation of communities around specific topics.
    - Announcements allow you to manage and publish important news to your user base.
    - Instant Messaging and Presence enable real-time awareness and communication with available users in the context of a business task.
    - Web and Voice Conferencing enables real-time communication with internal and external business users.
    - Lists provide a way to manage list data directly on the web as well as export and import it from and to Microsoft Excel.
    - Oracle WebCenter Analytics provides comprehensive reporting metrics on activity and content usage within portals or composite applications.
    - Activity Streams allow you to track activities and visualize your business networks.

    While being able to integrate into your portal deployment, these services are also integrated into how users are already working. This includes integration with software such as Microsoft Outlook and Microsoft Office, and mobile devices such as the Apple iPhone.
    These services are just the tip of the iceberg regarding the social and collaborative services that Oracle WebCenter has to offer your employees. Be sure to keep checking back this week; in future posts, we’ll delve deeper into a few of these collaborative services and discuss how a combination of collaborative services offers a better portal deployment to empower business users. Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, social, activity streams, blogs, wikis

    Read the article

  • Using R on your Oracle Data Warehouse

    - by jean-pierre.dijcks
    Since it is Predictive Analytics World in our backyard (or are we San Francisco’s backyard…?), I figured it is well worth the time to dust off some old but important news. With big data (should we start calling it “any data analytics” instead?) being the buzzword and analytics the key operative goal, not moving data around is becoming more and more critical to business users. Why? Because instead of spending time moving data around into your next analytics server, you should be running analytics on those CPUs. You could always do this with Oracle Data Mining within the Oracle Database, but a lot of folks want to leverage R as their main tool. Well, this article describes how you can do this, and has since 2010… As Casimir Saternos concludes in the article: “There is a growing awareness of the need to effectively analyze astronomical amounts of data, much of which is stored in Oracle databases. Statistics and modeling techniques are used to improve a wide variety of business functions. ODM accessed using the R language increases the value of your data by uncovering additional information. RODM is a powerful tool to enable your organization to make predictions, classify data, and create visualizations that maximize effectiveness and efficiencies.” Happy Analysis!
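
    The article’s approach uses RODM, an R package that pushes the work into the database instead of pulling the data out. The snippet below is not from the article; it is just a minimal, hedged JDBC sketch of that same in-database idea, scoring rows with Oracle’s SQL PREDICTION function. The connection string, the customers table, and the churn_model mining model are assumptions for illustration only.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class InDatabaseScoring {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details; requires the Oracle JDBC driver.
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                     Statement stmt = conn.createStatement();
                     // The mining model is applied where the data lives; only the
                     // scored results travel over the wire.
                     ResultSet rs = stmt.executeQuery(
                         "SELECT cust_id, "
                       + "       PREDICTION(churn_model USING *) AS predicted_class "
                       + "FROM customers")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("cust_id")
                                + " -> " + rs.getString("predicted_class"));
                    }
                }
            }
        }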

    Read the article

  • In a Class of Its Own: The Oracle Partner Diamond Level

    - by A&C Redaktion
    There are Oracle partners who are so good that we ran out of levels for them in the OPN specialization program. Well, Oracle is not known for solution-oriented work for nothing, and in this case too a brilliant solution was found: the Diamond level. In September, with the globally active IT company Infosys, we were able to welcome our second most highly qualified partner, after Accenture, to the Diamond level. Once again, warm congratulations to Bangalore! Infosys stands out for its excellent expertise in Oracle solutions and its comprehensive consulting competence. With 25,000 Oracle consultants worldwide and 30 specializations, five of them Advanced Specializations, Infosys is more than qualified to receive the highest distinction and the unique benefits that come with it. The Diamond level is intended for exactly this kind of company: partners who have invested in Oracle's entire solution portfolio and reached the highest degree of specialization. We place particular value on Diamond partners advising demanding customers worldwide and delivering individually tailored, high-quality Oracle solutions. In return, the Diamond level also has plenty to offer: through the prominent placement on the Oracle product pages and the assigned Global Alliance Manager, you reach millions of potential customers. You also gain access to unique support and qualification opportunities. Have we piqued your interest? On our partner pages you can easily register your company for the OPN program, complete specializations, and climb level by level.

    Read the article

  • Multidimensional Thinking – 24 Hours of PASS: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary, which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of PASS page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12 hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!)

    Multidimensional Thinking: The Presentation
    My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did!

    Kimball All the Way
    There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources, which gets into much more detail about all kinds of scenarios (well beyond the basics!), is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going in the right direction, because it just didn’t click with me initially. I’ve had years of practice since then, and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right!

    Schema Generation
    One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method you’re comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. Yes, the subject was Integration Services, but as part of that presentation I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

  • Geek City: What gets logged for SELECT INTO operations?

    - by Kalen Delaney
    Last week, I wrote about logging for index rebuild operations. I wanted to publish the result of that testing as soon as I could, because it dealt with a specific question I was trying to answer. However, I actually started out my testing by looking at the logging that was done for a different operation, and ended up generating some new questions for myself. Before I started testing the index rebuilds, I thought I would just get warmed up by observing the logging for SELECT INTO. I thought I...(read more)

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design for your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to have the two organizations share technology between them. An organization may choose to retain both systems and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly “sunset” or stop using one organization’s system, cutting over to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import them into the other. Employees at the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet.

    Implementation: A distributed computing paradigm can be a good strategic solution to most of these strategies. Retaining both systems is made simpler by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any connection other than to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure holds the same limitations as an on-premises system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the service into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use case documented here.

    References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx | 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

    PE files
    .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable and execute native code to start the CLR, which would then read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It still is part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

    Structure of a PE file
    Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.

    PE headers
    As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly, with the relevant bytes that I will refer to in this post highlighted. Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram. The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80). Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies whether this is a GUI or console app - 0x20 for a GUI app, 0x30 for a console app.

    Data directories & section headers
    After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address table are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories come the section headers, one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections:
    - .text: base RVA 0x2000, file offset 0x200
    - .rsrc: base RVA 0x4000, file offset 0xa00
    - .reloc: base RVA 0x6000, file offset 0x1000
    The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page of the right-click file properties dialog, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub.

    What about the CLR?
    As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly.

    Conclusion
    To summarize, the important data in the first 512 bytes of a file is:
    - DOS header. This contains a pointer to the PE signature.
    - DOS stub, which we'll be looking at in a later post.
    - PE signature.
    - PE file header (aka COFF header). This specifies whether the file is an exe or a dll.
    - PE standard fields. These specify whether the file is AnyCPU/32-bit or 64-bit.
    - PE NT-specific fields. These specify what type of app this is, if it is an app.
    - Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section.
    - Section headers. These are used to map between RVA and file offset. The important one is .text, which is where all the CLR data is stored.
    In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
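
    To make the header offsets and RVA arithmetic above concrete, here is a small Java sketch of my own (not from the post) that reads just the fields discussed: the PE signature offset stored at 0x3C, the DLL bit (0x2000) in the last two bytes of the COFF header, and the RVA-to-file-offset conversion using the post's .rsrc example numbers. It assumes a well-formed file and does no other validation.

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class PeQuickLook {

            public static void main(String[] args) throws IOException {
                try (RandomAccessFile f = new RandomAccessFile(args[0], "r")) {
                    // Offset 0x3C of the DOS header holds the file offset of the PE signature.
                    f.seek(0x3C);
                    long peOffset = readUInt32(f);

                    // The signature itself should be 'P', 'E', 0, 0.
                    f.seek(peOffset);
                    boolean isPe = f.read() == 'P' && f.read() == 'E'
                            && f.read() == 0 && f.read() == 0;

                    // The COFF header follows the 4-byte signature; its last two bytes
                    // (offset 18 within the 20-byte header) are the Characteristics flags.
                    f.seek(peOffset + 4 + 18);
                    int characteristics = readUInt16(f);
                    boolean isDll = (characteristics & 0x2000) != 0;

                    System.out.println("PE signature ok: " + isPe + ", DLL: " + isDll);

                    // RVA -> file offset, using the post's numbers: .rsrc has base RVA
                    // 0x4000 and file offset 0xa00, so RVA 0x401d maps to 0xa1d.
                    System.out.printf("RVA 0x401d -> file offset 0x%x%n",
                            rvaToFileOffset(0x401d, 0x4000, 0xa00));
                }
            }

            // offset = (rva - sectionBaseRva) + sectionFileOffset
            static long rvaToFileOffset(long rva, long sectionBaseRva, long sectionFileOffset) {
                return (rva - sectionBaseRva) + sectionFileOffset;
            }

            // PE values are little-endian, so assemble the bytes low-order first.
            static int readUInt16(RandomAccessFile f) throws IOException {
                return f.read() | (f.read() << 8);
            }

            static long readUInt32(RandomAccessFile f) throws IOException {
                return readUInt16(f) | ((long) readUInt16(f) << 16);
            }
        }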

    Read the article

  • How much should I charge an hour for freelance iOS development?

    - by Tyler Bell
    I am a fairly competent developer who already holds a job developing iOS applications. The job is through the university I attend. The producer of the apps that I develop is always trying to set me up with freelance opportunities to get my work out there and to get me some more work/experience. What is a reasonable price to charge (either hourly or per app)? I'd be working by myself, on my own equipment, from start to finish in the design process. Just wondering what a reasonable price would be... I've heard up to $30? Thanks

    Read the article

  • How do you make comp.sci students and future programmers aware of the various software licenses and their nuances?

    - by Samyak Bhuta
    To be specific: How would you include it as part of a curriculum? Would it be too boring to just introduce them as a pure law subject? Are there any course structures available, or can we derive one? What are the books that could be used? I would like to see that, after going through the course, a candidate is well aware of "what software licenses are and what they are good for"; the various implications of not knowing them in their proper sense; what licenses they should use for their own code; and what to consider when they are trying to use certain libraries or tools in their project, gauging the risks/rewards associated with them. The idea is to let them make informed choices when they are professionals/practitioners in the field of programming, not to make them a substitute for a lawyer or even a paralegal who is going to fight the case or draft things.

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi, this question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments. My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to the fact that management still has to meet business requirements along with quite a bit of ad hoc requests from high above. Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend in the agile team just for kicks, I am not allowed to without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of the agile setup, is to provide a complete vacuum for agile developers from any interruptions and to have them put in a good 6+ productive hours. Well, guys, I am no agile guru, but from what I have read in the Yahoo agile rollout document and similar ones for other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and correct issues as they arise to put them back on track. For starters, it requires training for developers and coaching for managers, etc. The current Scrum master was a manager who took a couple of days of agile training paid for by management and is now leading this agile team. I have also heard in the meeting that the agile manifesto isn't a dictate, that agile is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable. In conclusion, I always thought agile was supposed to bring harmony to development teams, which results in happy developers. However, I am getting the very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please: is this one of those examples of good practices used for the selfish advantage of more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they only breathe work because they are at work. Thanks. Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with the management being completely cheap. Cutting out expensive coffee for a cheaper version, emphasis on savings and being productive while staying as lean as possible. My feeling is that someone in management behind closed doors threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or, maybe, it will allow us to reduce headcount if that's the case. EDITED: They are having their 5-minute daily meeting. But they are not allowed to chat or talk with someone outside of their team. All focus is on work.

    Read the article

  • What have we lost from computers 20 years ago?

    - by Martin Beckett
    Following the theme of "I can't believe computers used floppy disks" and "did you have debuggers 20 years ago" questions (and because even putting my age in hex I don't look young anymore): what have we lost from computers 20 years ago?
    - VMS's versioning file system. Make a mistake? No problem, myprog.cpp;5 is still there.
    - Home computers that powered up to give you a word processor or a BASIC prompt in under 0.5 seconds.
    - User ports, serial ports and ADCs that you could control things with, without an Arduino.
    - APIs that actually did what they said (even DOS interrupts), without having to guess and experiment your way through layers of frameworks.

    Read the article

  • Skills for a RAD developer

    - by Janis Peisenieks
    I am about to embark on an exquisite journey. I have applied for an event where, in a span of 48 straight hours, strangers meet, throw around some great ideas, decide on teams, and make a working prototype of an IT project. All within 48 hours. I anticipate that this will be a very skill- and ability-intensive experience, and I want to be prepared. Since I will need to develop my part of the code quickly, I have the following question: what would be the most needed skills for these 48 hours? I do know that things like proper documentation, version control and such are pretty important for full-fledged application/program/web development, but what about this span of frantic coding? Background: I am a web developer, so answers applicable to web development would be more appreciated than others.

    Read the article

  • Is the "One Description Table to rule them all" approach good?

    - by DavRob60
    Long ago, I worked (as a client) with a piece of software which used a centralized table for its codified elements. Here, as far as I remember, is what the table looked like:
    Table_Name (PK)
    Field_Name (PK)
    Code (PK)
    Sort_Order
    Description
    So, instead of creating a table every time they needed a codified field, they were just adding rows to this table with the new Table_Name and Field_Name. I'm sometimes tempted to use this pattern in some of the databases I design, but I have resisted it so far; I think there's something wrong with it, but I cannot put my finger on it. Is it just because you end up with some of the structure logic within the data, or is it something else?

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am just thinking about the best way to organize my code. The project is basically going to be a general management system that will be capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wikis, forums, project management system, etc., with each of them being just a separate 'module'. You can read more about it in my blog post here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks). For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage the users system. Then you just add on modules as you need them, and the module I will be creating first is a simple blog module to test it out before I move on to the big module, which is a project management system. Now I am just trying to think of the best way to structure this so that it is easy for users to add in their own modules but easy for me to update the core system without worrying about the user modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When the user wants to add in their own modules, they would just add in a new project (or multiple projects). The thing is, I am not sure that it is even possible to use multiple projects, all with their own controllers, razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own razor view templates, CSS, JavaScript and image files, and would also need access to some of the core razor view templates, CSS, JavaScript and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, razor view templates, CSS, JavaScript and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?

    Read the article

  • Are `break` and `continue` bad programming practices?

    - by Mikhail
    My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration:

        function verify(object) {
            if (object->value < 0) return false;
            if (object->value > object->max_value) return false;
            if (object->name == "") return false;
            ...
        }

    The point here is that the function first checks that the conditions are correct, then executes the actual functionality. IMO the same applies to loops:

        while (primary_condition) {
            if (loop_count > 1000) break;
            if (time_exect > 3600) break;
            if (this->data == "undefined") continue;
            if (this->skip == true) continue;
            ...
        }

    I think this makes it easier to read & debug, but I also don't see a downside. Please comment.

    Read the article

  • What makes a good developer / system documentation?

    - by deamon
    Much time is wasted getting new developers started with existing software systems, because there is no good documentation. But what makes system documentation good? One thing is good API documentation, like the Java API docs, but how do you transfer the "bigger picture" and other things that cannot be placed in the API docs? One constraint is that it should not be too hard and time-consuming to write the docs, because that is one reason why they are omitted so often. So, what makes documentation good?

    Read the article

  • Olympic clock stops after a few hours: can this even be a software problem? [closed]

    - by mvexel
    I fail to understand how something as uncomplicated as a countdown clock can fail, to the considerable public humiliation of sponsor and renowned clockmaker Omega, after only a few hours of operation. The clock was 'developed by our experts and fully tested', according to a spokesperson, who goes on to say that it is 'not immediately apparent what has caused the problem'. Can this even be a software problem? What has gone wrong here?

    Read the article

  • Duplication in parallel inheritance hierarchies

    - by flamingpenguin
    Using an OO language with static typing (like Java), what are good ways to represent the following model invariant without large amounts of duplication? I have two (actually multiple) flavours of the same structure. Each flavour requires its own data (unique to that flavour) on each of the objects within that structure, as well as some shared data. But within each instance of the aggregation, only objects of one (the same) flavour are allowed. FooContainer can contain FooSources and FooDestinations and associations between the "Foo" objects; BarContainer can contain BarSources and BarDestinations and associations between the "Bar" objects.

        interface Container {
            List<? extends Source> sources();
            List<? extends Destination> destinations();
            List<? extends Association> associations();
        }

        interface FooContainer extends Container {
            List<? extends FooSource> sources();
            List<? extends FooDestination> destinations();
            List<? extends FooAssociation> associations();
        }

        interface BarContainer extends Container {
            List<? extends BarSource> sources();
            List<? extends BarDestination> destinations();
            List<? extends BarAssociation> associations();
        }

        interface Source { String getSourceDetail1(); }
        interface FooSource extends Source { String getSourceDetail2(); }
        interface BarSource extends Source { String getSourceDetail3(); }

        interface Destination { String getDestinationDetail1(); }
        interface FooDestination extends Destination { String getDestinationDetail2(); }
        interface BarDestination extends Destination { String getDestinationDetail3(); }

        interface Association {
            Source getSource();
            Destination getDestination();
        }

        interface FooAssociation extends Association {
            FooSource getSource();
            FooDestination getDestination();
            String getFooAssociationDetail();
        }

        interface BarAssociation extends Association {
            BarSource getSource();
            BarDestination getDestination();
            String getBarAssociationDetail();
        }
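
    One possible direction (just a sketch, not from the question or its answers): make the shared interfaces generic in their flavour-specific types, so each hierarchy declares its extra detail methods once and the per-flavour container is a one-line binding. The generic bounds below are an illustrative assumption.

        import java.util.List;

        interface Source { String getSourceDetail1(); }
        interface Destination { String getDestinationDetail1(); }

        // The association is parameterized by the flavour's source/destination types.
        interface Association<S extends Source, D extends Destination> {
            S getSource();
            D getDestination();
        }

        // The container ties the three type parameters together, so a given
        // container can only ever hold one flavour.
        interface Container<S extends Source,
                            D extends Destination,
                            A extends Association<S, D>> {
            List<? extends S> sources();
            List<? extends D> destinations();
            List<? extends A> associations();
        }

        // Each flavour now only adds its unique details.
        interface FooSource extends Source { String getSourceDetail2(); }
        interface FooDestination extends Destination { String getDestinationDetail2(); }
        interface FooAssociation extends Association<FooSource, FooDestination> {
            String getFooAssociationDetail();
        }
        interface FooContainer
                extends Container<FooSource, FooDestination, FooAssociation> { }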

    Read the article

  • Best book for a developer who needs to learn software fundamentals

    - by tharrison
    I have recently inherited a team of developers, none of whom really have any programming experience. Some are really bright and are learning on their own. I am looking for one or two books that are practical but show the core practices of professional development: structure, OO, naming, DRY, why elegance matters, etc. When I was learning, I loved Code Complete and Programming Pearls, but they are dated now. Any recommendations for good books that could be used in tandem with a language-specific book to help them understand? Thanks in advance!

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, while identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or involve code that does not trigger the bug based on how it is currently called. But unit tests can be created that execute specific scenarios with valid inputs and cause the bug to be seen. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be made to execute the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail versus failures caused by a code check-in would be difficult.
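
    One lightweight convention that addresses the "hundreds of failures in the logs" concern (a sketch only, not something the question prescribes; the Animal enum, the parse() stand-in, and the bug number are hypothetical) is to check the failing test in but mark it as skipped, with a reference to the tracked bug, so it documents the desired behaviour without breaking the build. With JUnit 4 that could look like:

        import static org.junit.Assert.assertEquals;
        import org.junit.Ignore;
        import org.junit.Test;

        public class AnimalParserTest {

            // Minimal stand-ins for the code under test (hypothetical, for illustration).
            enum Animal { Cat, Null }

            static Animal parse(String name) {
                // Current behaviour: case sensitive, unknown input maps to Null.
                return "Cat".equals(name) ? Animal.Cat : Animal.Null;
            }

            @Test
            public void parsesExactCase() {
                assertEquals(Animal.Cat, parse("Cat"));   // passes today
            }

            // Desired behaviour that currently fails: checked in but skipped, with a
            // pointer to the tracked bug, so the build stays green and the gap stays visible.
            @Ignore("BUG-123: parse() should be case-insensitive")
            @Test
            public void parsesLowerCase() {
                assertEquals(Animal.Cat, parse("cat"));
            }
        }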

    Read the article

  • What issues are there for doing freelance work?

    - by Telos
    I'm considering doing some contract work on the side of my normal job. I know that it will kill my free time, but I figure I can control when I'm doing projects and get a little extra money, or even eventually make it my full-time job. But as I've never done this before, I'm wondering what issues people face when doing this kind of work. For instance: how do you find customers? What difficulties do you normally face on a project? How do you deal with projects that are too large for one programmer to effectively complete? What about projects that need other skill sets (for instance, web design for a web app)?

    Read the article
