Search Results

Search found 28163 results on 1127 pages for 'project vba'.


  • What stands out on a Junior's CV

    - by DeanMc
    Hi all, I am looking for advice, hopefully from people more experienced than me. I am going to start applying for junior positions in .NET development in the summer. At the moment the only thing on my CV is a project I have done for Windows Phone 7 called Sprout SMS. It allows the user to send webtexts as provided by Irish operators. The app is doing well and is top ten in my local marketplace (Ireland). By trade I am a salesman and that is the extent of most of my employment. I haven't been to college, and due to financial commitments I would not be able to go down the road of full-time education. I have kept up to date with various .NET-related tech in a junior capacity and am now looking to change careers. What I want to find out is what stands out on a junior's CV. My lack of education counts against me, so I am looking to offset this with practical experience. I am open to all suggestions, and from the end of this month I am free to pursue "notches" that will make my CV stand out. So, in short: if you were hiring a junior, what would you like to see that would make you take a second look or request an interview? NOTE: I do fear this is a subjective subject. Rather than debate which items are best to have on a junior's CV, I would like to concentrate on what advice you would give to a junior who is looking to apply for a job this year. Thanks to all who respond.


  • Report Books and Parameters with Telerik Reporting

    Telerik Reporting provides a simple, yet powerful, component called the ReportBook that allows multiple reports to be combined into one. Doing so makes displaying, printing, and exporting multiple reports a much simpler task for end users. Providing this type of functionality does leave one question, though: "How are report parameters handled?" This entry will focus on providing the answers to that simple question. Click here to download the sample code so you can follow along. Creating a ReportBook: ReportBooks are supported in all three environments the ReportViewer supports — WinForms, Silverlight, and ASP.NET — and adding a ReportBook to each of these environments is very similar. This example focuses on using a ReportBook with the WinForms ReportViewer.
    1. Create a WinForms application.
    2. Add a reference to the project containing the reports.
    3. Drag a ReportViewer from the ToolBox into the designer.
    4. Drag a ReportBook from the ToolBox into the designer. A dialog will be displayed asking for the reports to be included in the ReportBook.
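
    For readers who prefer code to designer steps, the same wiring can also be done by hand. A minimal sketch, assuming the Telerik Reporting API of that era in which the ReportBook exposes a collection of reports and the WinForms viewer takes the book directly; Report1 and Report2 are hypothetical stand-ins for your own report classes, and this is not taken from the article's sample code:

        // Combine two reports into one book and show it in the WinForms viewer.
        var book = new Telerik.Reporting.ReportBook();

        book.Reports.Add(new Report1());   // hypothetical report classes
        book.Reports.Add(new Report2());

        reportViewer1.Report = book;       // assumed viewer property of that API version
        reportViewer1.RefreshReport();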


  • Oracle University New Courses (Week 42)

    - by rituchhibber
    Last week, Oracle University released the following new courses (or new versions of them):
    Database: Oracle Enterprise Manager Cloud Control 12c: Install & Upgrade (Training On Demand); MySQL Performance Tuning (Training On Demand); Oracle Database 11g: New Features for Administrators (Training On Demand, in German); Oracle Database 11g: Professioneller Einstieg in SQL (Training On Demand, in German)
    Fusion Middleware: Oracle GoldenGate 11g Management Pack: Overview (1 day, in German); Oracle GoldenGate 11g Fundamentals for Oracle (4 days); Oracle WebCenter Content 11g: Site Studio Essentials (5 days); Oracle WebCenter Portal 11g: Build Portals with Spaces (3 days)
    Business Intelligence: Oracle BI 11g R1: Create Analyses and Dashboards (4 days)
    SOA & BPM: SOA Adoption and Architecture Fundamentals (3 days)
    eBusiness Suite: R12 Oracle Using and Maintaining Approvals Management (self-study course); R12 Oracle HRMS Advanced Benefits Fundamentals (self-study course)
    WebLogic: Oracle WebLogic Server 11g: Monitor and Tune Performance (Training On Demand)
    Financial: Oracle Project Financial Planning 11.1.2: Create Projects (3 days)
    Tuxedo: Oracle Tuxedo 12c: Application Administration (5 days)
    Java: Java SE 7: The Platform Evolves (self-study course)
    Primavera: Primavera Client/Server Partner Trainer Course (self-study course); Primavera Progress Reporter 8.2 (self-study course)
    Identity Management: Oracle Identity Manager 11g: Essentials (4 days, in German)
    If you would like further details or information about course dates, simply contact your local Oracle University team.


  • What Technology can Render Medium Scale 3d Environments in a Web-Browser

    - by JakeM
    I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites, including contours, water pipeline locations, buildings, etc. I am trying to decide what technology/libraries to use that will create a web app that works on the Android web browser, iOS Safari, IE9, Safari, Firefox and Chrome, and also what technology will provide speed of development. I understand that this is asking to have my cake and eat it too / asking for the moon, but I don't know all the technologies out there - there may be advanced libraries that can render 3D environments across many web browsers, including the main smartphone ones, that I don't know of. The 3D rendering would not be highly detailed buildings or water with effects, but rather simple 3D representations of these objects. The environment would be navigable by dragging around, and you could view the landscape in layers (view only contour lines, view only underground pipelines, view only sewerage pipes, etc.). Are there any 3D libraries for web browsers out there? Is there a way to run OpenGL (or OpenGL ES) through a web browser? What technology would you use if you were making this kind of app/web app that should work on desktop Windows, Android, iOS and Windows Phone? Is there any technology I have failed to mention that would be good for this kind of project? I am tending towards a browser-driven web app because I get that cross-platform ability (it even works on Linux and Mac OS with compatible web browsers). I also know of CSS3 transforms that can create cubes that rotate in 3D space (NOTE: this only works in WebKit browsers - so no IE :( ). But I don't know if CSS3 is robust enough to render whole 3D environments - do you think it could? Maybe I could use HTML5 canvases for this? Can Google Maps create custom 3D maps?


  • Is defining every method/state per object in a series of UML diagrams representative of MDA in general?

    - by Max
    I am currently working on a project where we use a framework that combines code generation and ORM together with UML to develop software. Methods are added to UML classes and are generated into partial classes where "stuff happens". For example, a UML class "Content" could have the method DeleteFromFileSystem(void), which could be implemented like this:

        public partial class Content
        {
            public void DeleteFromFileSystem()
            {
                File.Delete(...);
            }
        }

    All methods are designed like this. Everything happens in these gargantuan logic-bomb domain classes. Is this how MDA or DDD or similar is usually done? For now my impression of MDA/DDD (which this has been called by higher-ups) is that it severely stunts my productivity (everything must be done The Way) and that it hinders maintenance work, since all logic is roped, entrenched and interspersed into the mentioned gargantuan bombs. Please refrain from interpreting this as a rant - I am merely curious whether this is typical MDA or some sort of extreme MDA. UPDATE: Concerning the example above, in my opinion Content shouldn't handle deleting itself as such. What if we change from local storage to Amazon S3? In that case we would have to reimplement this functionality, scattered over multiple places, instead of providing a second implementation for one single interface.
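
    To make the UPDATE concrete, here is a minimal sketch of the single-interface alternative the poster has in mind (all names are hypothetical, and the S3 body is left as a placeholder rather than real AWS SDK calls): deletion goes behind a storage abstraction, so switching from local disk to Amazon S3 means adding one class instead of hunting through every generated domain method.

        using System.IO;

        public interface IFileStorage
        {
            void Delete(string path);
        }

        public class LocalFileStorage : IFileStorage
        {
            public void Delete(string path)
            {
                File.Delete(path);   // the behavior currently inlined in Content
            }
        }

        public class S3FileStorage : IFileStorage
        {
            public void Delete(string path)
            {
                // Call the AWS SDK here; omitted to keep the sketch dependency-free.
            }
        }

        public partial class Content
        {
            // Supplied by whatever composition mechanism the framework allows.
            public IFileStorage Storage { get; set; }

            public void DeleteFromFileSystem()
            {
                Storage.Delete(FilePath);   // FilePath is a hypothetical property
            }
        }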


  • Is knowledge of hacking mechanisms required for an MMO?

    - by Gabe
    Say I was planning, in the future (not now! There is a lot I need to learn first), to participate in a group project making a massively multiplayer online game (MMO), and my job would be the networking portion. I'm not that familiar with network programming (I've read a very basic book on PHP and MySQL, and I messed around a bit with WAMP). In the course of my studying of PHP and MySQL, should I look into hacking? Hacking as in port scanning, router hacking, etc. In MMOs people are always trying to cheat - bots and such - but the worst scenario would be having someone hack the databases. This is just my conception of it; I really don't know. I do, however, understand networking fairly well, like subnetting/ports/IPs (local/global)/etc. In your professional opinion (if you understand the topic, enlighten me), should I learn about these things in order to counter the possibility of this happening? Also, out of the things I mentioned (port scanning, router hacking), is there anything else that pertains to hacking that I should look into? I'm not too familiar with the malicious/security aspects of networking. And a note: I'm not some kid trying to learn how to hack. I just want to learn as much as possible before I go to college, and I really need to know if I need to study this or not.
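
    One concrete pointer on the "hack the databases" worry: for a web-backed game the usual entry point is SQL injection, and the standard counter is parameterized queries instead of string-built SQL. A minimal sketch (C# with Connector/NET here, for consistency with the other snippets in this listing; in PHP the same idea is mysqli/PDO prepared statements; connection string and table are hypothetical):

        using System;
        using MySql.Data.MySqlClient;

        class LoginCheck
        {
            static bool UserExists(string connectionString, string userName)
            {
                using (var conn = new MySqlConnection(connectionString))
                using (var cmd = new MySqlCommand(
                    "SELECT COUNT(*) FROM users WHERE name = @name", conn))
                {
                    // The value travels as data, never as SQL text, so input like
                    // "'; DROP TABLE users; --" cannot rewrite the query.
                    cmd.Parameters.AddWithValue("@name", userName);
                    conn.Open();
                    return Convert.ToInt64(cmd.ExecuteScalar()) > 0;
                }
            }
        }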


  • Cork Board Solution to tack things up on top or to the side of a monitor

    - by Bela
    I'm trying to find some sort of physical product that would go either on the top or the side of an LCD monitor and give me space to tape/push-pin/Post-it-note things for myself. In my head I am picturing an extra space above your monitor, 6 inches tall, that lets you tape or pin things up in front of you. For random notes and things I want to keep track of, having them on the top/side of my monitor would keep the space on my desk itself clear, and they would be closer to my field of vision. Does something like this exist? Do I need to rig up something myself? EDIT: This is the closest thing I can find so far: http://www.unplggd.com/unplggd/diy-project/reverse-engineer-how-to-feel-up-your-monitor-048251


  • Batch OCR for many PDF files (not already OCRed)?

    - by David
    Hello, I use Google Desktop Search (I am on Vista) and not all my PDF files are recognized in my archive folder. This is normal, as "PDF files that contain scanned images" are not indexed (http://desktop.google.com/support/bin/answer.py?hl=en&answer=90651). So I would like to OCR the many PDF files of mine that are not already OCRed. My goal: I give the program a folder and it searches the subfolders on its own for the PDF files that need to be converted into OCRed PDF files. Note: in the past, if a PDF file was password-protected, I removed the password with another batch (paying) tool: verypdf.com "pwdremover". Any (not too expensive) ideas? I have already tried: FineReader 6 Pro on XP at the time, but there was no batch processor included; Paperfile (paperfile.net), which uses Tesseract (code.google.com/p/tesseract-ocr/), but the OCR is only PDF to text, not PDF to PDF! There is also another project, code.google.com/p/ocropus. Thanks in advance ;)
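
    The "give it a folder and let it find the PDFs itself" half of the goal is easy to script. A minimal C# sketch of the directory walk (OcrToSearchablePdf is a hypothetical placeholder for whatever OCR engine you settle on; detecting whether a PDF already has a text layer needs a PDF library, so this sketch settles for checking whether an output file from a previous run exists):

        using System;
        using System.IO;

        class BatchOcr
        {
            static void Main(string[] args)
            {
                string root = args[0];

                // Walk every subfolder under the root looking for PDFs.
                foreach (string pdf in Directory.EnumerateFiles(
                             root, "*.pdf", SearchOption.AllDirectories))
                {
                    string output = Path.ChangeExtension(pdf, ".ocr.pdf");
                    if (File.Exists(output))
                        continue;   // already converted on an earlier run

                    OcrToSearchablePdf(pdf, output);
                }
            }

            static void OcrToSearchablePdf(string input, string output)
            {
                // Hypothetical: shell out to, or call, the OCR tool of your choice.
            }
        }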


  • Java game object pool management

    - by Kenneth Bray
    Currently I am using arrays to handle all of my game objects in the game I am making, and I know how terrible this is for performance. My question is: what is the best way to handle game objects without hurting performance? Here is how I am creating an array and then looping through it to update the objects in it:

        public static ArrayList<VboCube> game_objects = new ArrayList<VboCube>();

        /* add objects to the game */

        while (!Display.isCloseRequested() && !Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
            for (int i = 0; i < game_objects.size(); i++) {
                // draw and update the object
                game_objects.get(i).Draw();
                game_objects.get(i).Update();
                //world.updatePhysics();
            }
        }

    I am not looking for someone to write me code for asset or object management, just point me in a better direction to get better performance. I appreciate the help you guys have provided me in the past, and I don't think I would be as far along with my project without the support on Stack Exchange!
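
    Since the usual answer here is pooling, a minimal sketch of the pattern may help: pre-allocate the objects once and recycle them through a free list, so the steady-state game loop allocates nothing and the garbage collector stays quiet. (Written in C# for consistency with the other snippets in this listing; the structure translates almost line for line to Java, e.g. with an ArrayDeque as the free list.)

        using System.Collections.Generic;

        // Minimal object pool: hand out pre-allocated instances and take them
        // back, instead of letting dead objects become garbage every frame.
        public class Pool<T> where T : new()
        {
            private readonly Stack<T> free = new Stack<T>();

            public Pool(int capacity)
            {
                for (int i = 0; i < capacity; i++)
                    free.Push(new T());
            }

            public T Acquire()
            {
                // Grow only if the pre-allocated supply runs dry.
                return free.Count > 0 ? free.Pop() : new T();
            }

            public void Release(T item)
            {
                free.Push(item);   // caller resets the object's state first
            }
        }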


  • Are there open source alternatives to Bitbucket, Github, Kiln, and similar DVCS browsing and management tools?

    - by Ryan Taylor
    I am aware of several tools/services that provide DVCS browsing and management, such as Bitbucket, Github, Kiln, SCM-Manager and RhodeCode. However, the use case I am considering is one where: any source code must reside on an employer's internal servers; the solution must be open source; it should provide a Bitbucket- or Github-like experience, including a project wiki, repository browsing and management, and social-coding aspects such as code review; and the solution should have Mercurial support (if not support for other DVCSs). Of these, only SCM-Manager and RhodeCode come close, as they can be installed on your own servers and are open source. However, they do not provide the Bitbucket or Github experience: there is no issue tracker or wiki, and the UI, while functional, is not up to par with Github or Bitbucket. I can get close with Trac or Redmine with their repository browsers, but unfortunately they do not have any repository management capabilities. Are there other open source tools out there that would provide a similar experience to Bitbucket, Github or Kiln?


  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this:

        public void OrderNewWidget(Widget widget)
        {
            if ((widget.PartNumber > 0) && (widget.PartAvailable))
            {
                WidgetOrderingService.OrderNewWidgetAsync(widget.PartNumber);
            }
        }

    I have several such methods in my code (the front half of an async web service call), and I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic. (Meaning I make sure I have the stuff I need before I allow the web service call to happen.) Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says: if you don't unit test them, and someone changes the guards, then there could be problems. And the first part of me says back: if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking widget availability, then I may not want that guard any more; if it is under unit test, I have to change two places now. I see pros and cons both ways, so I thought I would ask what others have done.
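
    For a sense of what such a test costs to write: if the static service call is first moved behind an injectable interface (a refactor; all names below are hypothetical), a mocking library like Moq with NUnit can pin down the guard's behavior in a few lines:

        using Moq;
        using NUnit.Framework;

        public interface IWidgetOrderingService
        {
            void OrderNewWidgetAsync(int partNumber);
        }

        [TestFixture]
        public class OrderNewWidgetGuardTests
        {
            [Test]
            public void DoesNotOrderWhenPartNumberIsInvalid()
            {
                var service = new Mock<IWidgetOrderingService>();
                var orderer = new WidgetOrderer(service.Object);   // hypothetical class holding the guarded method

                orderer.OrderNewWidget(new Widget { PartNumber = 0, PartAvailable = true });

                // The guard must swallow the call: the service is never invoked.
                service.Verify(s => s.OrderNewWidgetAsync(It.IsAny<int>()), Times.Never());
            }
        }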


  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration/deployment techniques. Here's what our source tree looks like right now (we use a single TFS Team Project):

        $/MAIN/src/
        $/MAIN/src/ApplicationA/VSSOlution.sln
        $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
        $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
        $/MAIN/src/ApplicationB/...
        $/MAIN/src/ApplicationC
        $/MAIN/src/SharedInfrastructureA
        $/MAIN/src/SharedInfrastructureB

    My goal (a pretty typical promotion pipeline):
    1. When a code change is made to a given application, I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on shared infrastructure components, and I often have some database scripts or changes as well.
    2. If developer testing passes, I want a manually triggered but automated deploy of that build on a STAGING server where end users will review new functionality.
    3. Once it's approved by end users, I want a manually triggered auto-deploy to production.
    Question: how can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is more single-application-specific; how is it best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3 of promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?


  • Developing Ext JS Charts in NetBeans IDE

    - by Geertjan
    I took my first tentative steps into the world of Ext JS charts today, in NetBeans IDE 7.4. I will make a screencast soon showing how such charts can be created with NetBeans IDE and Ext JS. Setting up Ext JS is easy in NetBeans IDE because there's a JavaScript library browser, by means of which I can browse for the Ext JS libraries that I need, and then NetBeans IDE sets up the project for me. The JavaScript code for the chart comes directly from here: http://www.quizzpot.com/courses/learning-ext-js-3/articles/chart-series The index.html is as follows:

        <html>
            <head>
                <title>TODO supply a title</title>
                <meta charset="UTF-8">
                <meta name="viewport" content="width=device-width">
                <link rel="stylesheet" href="js/libs/extjs/resources/css/ext-all.css"/>
                <script src="js/libs/ext-core/ext-core.js"></script>
                <script src="js/libs/extjs/adapter/ext/ext-base-debug.js"></script>
                <script src="js/libs/extjs/ext-all-debug.js"></script>
                <script src="app.js"></script>
            </head>
            <body>
            </body>
        </html>

    More info on Ext JS: http://docs.sencha.com/extjs/4.1.3/ By the way, quite a few other articles are out there on Ext JS and NetBeans IDE, such as these, which I will be learning from during the coming days: http://netbeans.dzone.com/extjs-rest-netbeans http://netbeans.dzone.com/articles/create-your-first-extjs-4 http://netbeans.dzone.com/articles/mixing-extjs-json-p-and-java


  • Good Scoop: The PeopleSoft/IBM Backstory

    - by Brian Dayton
    By Brian Dayton on April 12, 2010, 11:15 AM. Sometimes you're searching for something online and you find an unrelated bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of Grey Sparling, a PeopleSoft consulting shop in San Ramon, CA. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting, as is his commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through:
    - "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."
    - "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant."
    - "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of the WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."
    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting, especially the advantages of aligning development of applications and infrastructure under one roof. Here's a link to the whole blog entry.


  • DIY Carbonator Creates Pop Rocks Like Fizzy Fruit [Science]

    - by Jason Fitzpatrick
    If you’ve ever sat around wishing that scientists would stop wasting time trying to solve pressing global problems and instead genetically engineer a bizarre but delicious hybrid of Pop Rocks candy and wholesome fruit, this mad-scientist experiment is for you. Over at Evil Mad Scientist Laboratories they share a really fun weekend project. Contributor Rich Faulhaber was looking for a way to make eating fruit extra fun and science-infused for his kids. His solution? Build a homemade carbon dioxide injector that infuses fruit with carbonation. Having trouble imagining that? Envision a bowl of strawberries where every strawberry bursts into a crazy flurry of strawberry flavor and champagne bubbles every time you bite into it. Fizzy fruit! Hit up the link below to see how he combined pretty common parts - a CO2 tank from a paintball gun, a water filter canister from the hardware store, and other cheap and readily available parts (with the exception of the gas regulator, which he suggests you shop garage sales and surplus stores to find a deal on) - into a CO2 fruit infuser, and to read more about his setup and the procedure he uses to infuse fruit with carbonation. The C02inator [Evil Mad Scientist Laboratories via Hack a Day]


  • C++ and function pointers assessment: lack of inspiration

    - by OlivierDofus
    I've got an assessment to give to my students. It's about C++ and function pointers. Their skill level is intermediate: it is the first year of a programming school after a bachelor's degree. To give you something precise, here's a sample solution for one of the 3 exercises they had to do in 30 minutes (the question was: "here's a version of some code written without function pointers; write down the same thing but with function pointers"):

        typedef void (*fcPtr)(istream &);

        // Delete, Insert, Swap and Move are assumed to be declared elsewhere.
        fcPtr ArrayFct[] = { Delete, Insert, Swap, Move };

        void HandleCmd(const string & Cmd)
        {
            string AvailableCommands("DISM");
            string::size_type Pos;
            istringstream Flux(Cmd);
            char CodeOp;

            Flux >> CodeOp;
            Pos = AvailableCommands.find(toupper(CodeOp));
            if (Pos != string::npos)
            {
                ArrayFct[Pos](Flux);
            }
        }

    Any idea where I could find some inspiration? Some of the students have understood the principles, even though it's very hard for them to write C++ code. I know them, I know they're clever, and I'm pretty sure they should be very good project managers. So writing C++ code is not that important after all; understanding is the most important part (IMHO). I'm wondering about maybe breaking the habits and making half of the questions about the principles, or even better, giving a sample in another language and asking them why it's better to use function pointers instead of classical programming (usually a big switch-case). Any idea where I could look? Find some inspiration?


  • Server 2003 - Default installation of SharePoint throws JavaScript error 'undefined' is null or not an object

    - by Chris
    I have been handed a server that has a fresh installation of Small Business Server 2003. SharePoint has been installed, and it is a completely fresh copy (no changes made). I know for a fact that everything was paid for, so this isn't some dodgy copy. When I click any option in any of the floating menus in IE 8, the browser asks me if I want to debug: Error: 'undefined' is null or not an object. For example, I go to Projects, click on the sample project, then click Delete on the floating menu that pops up, and the error appears. This is brand new; I cannot believe I am having this problem! Has anyone come across this? Is it an old problem with an easy fix? All Windows updates are installed, but I see no updates for SharePoint. EDIT: This is WSS 2, so I am going to upgrade to WSS 3... http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14117 http://www.techrepublic.com/article/installing-windows-sharepoint-services-30-on-windows-server-2003/6176833 Hopefully this will resolve the problem.


  • Hallmarks of a Professional PHP Programmer

    - by Scotty C.
    I'm a 19-year-old student who really REALLY enjoys programming, and I'm hoping to glean from your years of experience here. At present I'm studying PHP every chance I get, and have been for about 3 years, although I've never taken any formal classes. I'd love to some day be a programmer full time and make a good career of it. My question to you is this: what do you consider to be the hallmarks or traits of a professional programmer? Mainly in the field of PHP, but other, more generalized qualifications are also more than welcome, as I think PHP is more of a hobbyist language and may not be the language of choice in the eyes of potential employers. Please correct me if I'm wrong. Above all, I don't want to waste time on something that isn't worthwhile. I'm currently feeling pretty confident in my knowledge of PHP as a language, and I know that I could build just about anything I need and have it "work", but I feel sorely lacking in design concepts and code structure. I can even write object-oriented code, but in my personal opinion that isn't worth a hill of beans if it isn't organized well. For this reason I bought Matt Zandstra's book "PHP Objects, Patterns, and Practice" and have been reading a little every day. Anyway, I'm starting to digress a little here, so back to the original question: what advice would you give to an aspiring programmer who wants to make an impact in this field? Also, on a side note, I've been working on a project with a friend of mine that would give a fairly good idea of where I'm at coding-wise. I'm going to give a link - I don't want anyone to feel as though I'm pushing or spamming here, so don't click it if you don't want to. But if you are interested in giving some feedback there as well, you can see the code on GitHub. I'm known as The Craw there. https://github.com/PureChat/PureChat--Beta-/tree/


  • What platform to use for browser based turn based strategy game

    - by sunwukung
    I want to write a browser-based strategy game that can be played by two players in separate locations. The game itself is predominantly turn-based. To that end, I want to determine the correct platform on which to build it. To prevent gamers "gaming" the system, the business logic needs to reside on the server. I could arguably use AJAX for a large part of the game's functionality, but at two key points in the game loop the opposing player can "counter" the current player's move. In addition, when it's time for the players to swap, AJAX polling is likely to fall short, so it's starting to look like WebSockets is going to be a requirement to pull this off smoothly. So the remaining question is the back end. I'd kind of like to build this in Python/Flask, but that is primarily out of wanting to tackle a project with that language, not necessarily because it's the appropriate tool for the job. The next most likely candidate has got to be NodeJS, given its (apparently) tighter integration with the WebSockets protocol. My question, then, is about the best platform on which to pursue this objective.
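
    Whichever back end wins, the server-authoritative shape stays the same: the client only sends move requests, and the server alone decides legality and turn order. A toy sketch of that guard (in C# for consistency with the other snippets in this listing; Move, IsLegal and Apply are hypothetical stand-ins for the game's actual rules):

        public enum Player { One, Two }

        public class Move { /* hypothetical move payload */ }

        public class TurnServer
        {
            private Player current = Player.One;

            // Every move is validated here, never in the browser, so a
            // tampered client cannot act out of turn or break the rules.
            public bool TryApplyMove(Player from, Move move)
            {
                if (from != current)
                    return false;            // out of turn: reject

                if (!IsLegal(move))
                    return false;            // game rules live server-side

                Apply(move);
                current = (current == Player.One) ? Player.Two : Player.One;
                return true;
            }

            private bool IsLegal(Move move) { /* rules here */ return true; }
            private void Apply(Move move) { /* mutate game state here */ }
        }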


  • Recommendations for USB flash drive fast at writing small files

    - by Andrew Bainbridge
    I want a drive that can be used as my work drive, storing a Subversion repo and sandbox for a small project. I'd also like it to be able to store a DVD rip. At the moment I've got a Super Talent Pico-C 8 GB. It's fast at reading and writing DVD rips, but the performance on small files (i.e. less than 4 KB) is utterly terrible (we're talking floppy disk speeds here). This Ars review measured a similar Super Talent drive and pretty much confirmed my measurements (take a look at the random write speeds on page 5). So, I'm looking for an 8 GB or bigger drive that doesn't suck at reading and writing small files and still has acceptable performance for very large files.


  • Enterprise Software Development with Java by Markus Eisele

    - by JuergenKress
    This is a blog about software development for the enterprise. It focuses on Java Enterprise Edition (J2EE/Java EE). Besides this, I blog about Oracle WebLogic and GlassFish Server and other technologies that hit my road. Java Mission Control 5.2 is finally here! Welcome, 7u40! It has been a while since we last heard of this fancy little thing called Mission Control. It came all the way from JRockit and was renamed to Java Mission Control. It is one of the parts which literally survived the convergence strategy between HotSpot and JRockit. With today's Java SE 7 Update 40 you can actually use it again. Java Mission Control 5.2: the former JRockit Mission Control (JRMC) is now called Java Mission Control (JMC) and is a tools suite which includes tools to monitor, manage, profile, and eliminate memory leaks in your Java application without introducing the performance overhead normally associated with tools of this type. Until today, the 5.1 version was available only within the Oracle HotSpot downloads, which could be obtained only by paying customers from the Oracle Support website. Today's release is the first release of Java Mission Control that is bundled with the HotSpot JDK! The convergence project between JRockit and HotSpot has reached critical mass: with the 7u40 release of the HotSpot JDK, an equivalent amount of Flight Recorder information is available from HotSpot. Read the full article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • Can a company use VPN to spy on me?

    - by orokusaki
    I'm about to work with a company on a development project, but they first need to set up a pretty complicated environment, and they suggested they use VPN to work on my machine to do this. Should I be concerned that somebody can just watch me work? It would be embarrassing if somebody could witness my work habits (e.g. asking questions on SO and researching all day is part of my daily work regimen, and makes me feel like a noob, but it keeps me sharp; I also listen to conspiracy videos and RadioLab podcasts all day :). Is VPN going to introduce this possibility, and if so, is there a way around it? EDIT: Also, is there a way I can always tell when somebody is VPNed into my computer?


  • Is the development of CLI apps considered "backward"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve daily repetitive tasks or eliminate human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backward", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether web or desktop, should have command-line, non-interactive options, though some people consider this a waste of programming resources. Is this goal a worthy one in a software project?


  • PHP5 without PHP4 compatibility

    - by Serty Oan
    This is the opposite question to this one. My premise is that when you write only PHP5 code there is no need for PHP4 compatibility. It is not helpful at all, and due to the differences between the object models in those two versions, it must penalize interpreter performance (or am I wrong?). So my questions are: is it possible to recompile PHP5 without backward compatibility with PHP4, or to disable it in any other way, to gain performance? If it is not possible, is there a project somewhere which aims at that goal?


  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways.

    If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:
    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:
    - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight - it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.

    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that makes up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self-contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:
    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.

    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:
    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero-tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]
            Enables warning messages for libraries specified with the -l
            command line option that are found by examining the default
            search paths provided by the link-editor. If a libname value
            is provided, the default library warning feature is enabled,
            and the specified library is added to a list of libraries for
            which no warnings will be issued. Multiple -z assert-deflib
            options can be specified in order to specify multiple
            libraries for which warnings should not be issued. The libname
            value should be the name of the library file, as found by the
            link-editor, without any path components. For example, the
            following enables default library warnings, and excludes the
            standard C library:

                ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011). Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions: The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

