Search Results

Search found 7513 results on 301 pages for 'actual'.


  • Designing javascript chart library

    - by coolscitist
    I started coding a chart library on top of d3js: My chart library. I read "Javascript API reusability" and "Towards reusable charts". However, I am NOT really following their suggestions because I am not really convinced by them. This is how my library can be used to create a bubble chart:

        var chart = new XYBubbleChart();
        chart.data = [{"xValue":200,"yValue":300},{"xValue":400,"yValue":200},{"xValue":100,"yValue":310}]; // set data
        chart.dataKey.x = "xValue";
        chart.dataKey.y = "yValue";
        chart.elementId = "#chart";
        chart.createChart();

    Here are my questions:

    1. It does not use chaining. Is that a big issue?
    2. Every property and function is exposed publicly (for example, width and height are exposed in Chart.js). OOP is all about abstraction and hiding, but I don't really see the point right now. I think exposing everything gives the flexibility to change properties and functionality inside subclasses and objects without writing a lot of code. What could be the pitfalls of such exposure?
    3. I have implemented functions like zooming and "showing info boxes when a data point is clicked" as "abilities" (example: XYZoomingAbility.js). Basically, such "abilities" accept a "chart" object and play around with its public variables to add functionality. This allows me to add an ability by writing activateZoomAbility(chartObject). My goal is to separate "visualization" from "interactivity": I want "interactivity", like zooming, to be plugged into the chart rather than built inside the chart. I don't want my bubble chart to know anything about "zooming"; however, I do want a zoomable bubble chart. What is the best way to do this?
    4. How should I test, and what should I test? I have written mixed tests: jasmine specs and actual html files, so that I can test manually in the browser.
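
    One way to keep the chart unaware of zooming is to have the chart core expose only a small event/registration surface and let each "ability" wire itself up from the outside. The sketch below illustrates that shape in Python rather than JavaScript, purely to show the idea; the names (Chart, on, emit, activate_zoom_ability) are invented for illustration and are not part of d3 or the library above.

        class Chart(object):
            # Minimal event-emitting chart core; it knows nothing about zooming.
            def __init__(self):
                self._handlers = {}

            def on(self, event, handler):
                self._handlers.setdefault(event, []).append(handler)

            def emit(self, event, *args):
                for handler in self._handlers.get(event, []):
                    handler(*args)

        def activate_zoom_ability(chart):
            # The "ability" attaches behaviour through the chart's public surface only.
            def handle_wheel(scale):
                print("rescale axes by factor", scale)  # placeholder for real zoom logic
            chart.on("wheel", handle_wheel)

        chart = Chart()
        activate_zoom_ability(chart)  # the chart stays unaware of zooming
        chart.emit("wheel", 1.25)     # simulate a mouse-wheel event

    The practical difference from poking at public variables directly is that an ability only depends on the events the chart promises to emit, so the chart internals can change without breaking every ability.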

    Read the article

  • Partner Webcast – Introducing Oracle Business Activity Monitoring - 18 October 2012

    - by Thanos
    Oracle Business Activity Monitoring (Oracle BAM), a component of both SOA Suite and BPM Suite, is a complete solution for building interactive, real-time dashboards and proactive alerts for monitoring business processes and services. Oracle BAM gives both business executives and operational managers timely information to make better business decisions. It is a real-time business visibility solution that lets you monitor business services and processes across the enterprise, correlate KPIs down to the actual business processes themselves, and, most importantly, change business processes quickly or take corrective action when the business environment changes. Let us show you how BAM provides powerful insight, through real-time dashboards, that can be a competitive edge for all your customers.

    Agenda: Oracle BAM Overview; Business Problems; New Approach with Oracle BAM 11g; Demonstration; Summary & Q&A.

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now. Send your questions and migration/upgrade requests to [email protected]. Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. All content is made available through our YouTube, SlideShare and Oracle Mix channels.

    Read the article

  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
    Yesterday I posted about my BizTalk Archiving Pipeline Component, which can be found on Codeplex if anyone is interested in taking a look. During testing of this component I began to encounter an "Object reference not set to an instance of an object" error when the component ran as part of a custom pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data could be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack. It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context resource tracker. So a block of my code goes from:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }

        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    to:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }

        pc.ResourceTracker.AddResource(originalStrm);

        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    So far this seems to have solved the issue: the error is no more, and my archive component is continuing its way through testing.

    Read the article

  • How do I load tmx files with Slick2d?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tilemap and display it. I tried at least a dozen different tmx files from numerous examples to see if it was the actual file that was corrupted. Every time I get this error:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx
            at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
            at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101)
            at game.Game.init(Game.java:17)
            at game.Tunneler.initStatesList(Tunneler.java:37)
            at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at game.Tunneler.main(Tunneler.java:29)

    Here is my Game class:

        package game;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;
        import org.newdawn.slick.tiled.TiledMap;

        public class Game extends BasicGameState {

            private int stateID = -1;
            private TiledMap map = null;

            public Game(int stateID) {
                this.stateID = stateID;
            }

            public void init(GameContainer container, StateBasedGame game) throws SlickException {
                map = new TiledMap("data/maps/desert.tmx", "maps"); // ERROR
            }

            public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException {
                //map.render(0, 0);
            }

            public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException {
            }

            public int getID() {
                return stateID;
            }
        }

    I've tried to see if anyone else has had similar problems but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue. My menu class can load images and display them just fine. Also, the file path is correct. Please let me know if you have any pointers that might help me sort this out.

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state - that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and any time you create or mutate x, it will always be a valid int.

    It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words; not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test.

    Does such a library exist to declare and validate constraint/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!)

    Contrived example:

        rectangle = {'length': 5, 'width': 10}

        # We live in a fictional universe where multiplication is super expensive.
        # Therefore any time we multiply, we need to cache the results.
        def area(rect):
            if 'area' in rect:
                return rect['area']
            rect['area'] = rect['length'] * rect['width']
            return rect['area']

        print area(rectangle)
        rectangle['length'] = 15
        print area(rectangle)  # compare expected vs. actual output!

        # imagine the same thing with object attributes rather than dictionary keys.
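
    I'm not aware of a standard library that does exactly this, but as a rough sketch of the idea (all names here are invented, not an existing package), a class decorator can re-check declared constraints on every attribute write:

        def constrain(**checks):
            # Class decorator: validate each declared attribute on every assignment.
            def decorate(cls):
                original_setattr = cls.__setattr__

                def checked_setattr(self, name, value):
                    if name in checks and not checks[name](value):
                        raise ValueError("constraint failed: %s = %r" % (name, value))
                    original_setattr(self, name, value)

                cls.__setattr__ = checked_setattr
                return cls
            return decorate

        @constrain(length=lambda v: v >= 0, width=lambda v: v >= 0)
        class Rectangle(object):
            def __init__(self, length, width):
                self.length = length
                self.width = width

        r = Rectangle(5, 10)   # fine
        r.length = -1          # raises ValueError: the object no longer "quacks like a duck"

    This only guards attribute assignment on the decorated class itself; dictionary-based objects like the rectangle above, or mutations done behind the object's back, would need a different hook.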

    Read the article

  • Teaching myself, as a physicist, to become a better programmer

    - by user787267
    I've always liked physics, and I've always liked coding, so when I got the offer for a PhD position doing numerical physics (the details are not relevant; it's mostly parallel programming for a cluster) at a university, it was a no-brainer for me. However, like most physicists, I'm self-taught. I don't have broad background knowledge about how to code in an object-oriented way, or the name of that specific algorithm that optimizes the search in some k-d tree. Since all my work so far has been more concerned with the physics and the scientific results, I undoubtedly have some bad habits - more so because my coding is my own, and not really teamwork.

    I have mostly used C since it is very straightforward and "what you write is what you get" - no need for fancy abstractions. However, I have recently switched to C++ since I'd like to learn more about the power that comes with abstraction, and it's pretty C-like (syntax-wise at least).

    How do I teach myself to code in a good, abstract way, like a graduate in computer science? I know my code is efficient, but I want it to be elegant and readable as well. Keep in mind that I don't have time to read several 1000-page tomes about abstract programming. I need to spend time on actual, physics-related research (my supervisor would laugh at me if he knew I spent time thinking about how to program elegantly). How do I assess whether my work is also good from a programmer's perspective?

    Read the article

  • Getting into driver development for linux [closed]

    - by user1103966
    Right now, I've been learning about writing device drivers for the Linux 3.2 kernel for about 2 months. So far I have been able to program simple char drivers that only read and write to a fictitious dev structure like a file, but now I'm moving to more advanced concepts. The new material I've learned about includes I/O port manipulation, memory management, and interrupts. I feel that I have a basic understanding of overall driver operation, but there is still so much that I don't know.

    My question is this: given that I have the basic theory of how to write a device driver for a piece of hardware, how long would it take to develop the skills to write the kind of driver software that companies would actually want to employ? I plan on getting involved in an open-source project and building a portfolio.

    Also, what type of beginner drivers could I write for hardware that would best help me develop my skills? I was thinking that taking on a project where I design my own key logger would be easy and a good assignment to help me understand how I/O ports and interrupts are used. I may want to eventually specialize in writing software for video cards or network devices, though these devices seem beyond my understanding at the moment. Thanks for any help.

    Read the article

  • How common is it to submit papers to journals or conferences outside of academia?

    - by Furry
    I worked in academia for a few years, but more on the D-side of R&D. The race for papers never appealed to me and I'm a practical rather than theoretical type, but I do like reading papers on certain topics (e.g. Google papers, NLP, FB papers, ...) a lot. How common is it for normally working developers to submit papers to conferences or even journals? It seems to be somewhat common in certain companies (it's not common or encouraged in mine). Do journals or conferences even take papers by an academic nobody (BSc) into consideration? I ask because I have a few rough ideas and I would just like to bring them into form, one way or the other.

    Bonus question: is there a list of CS (in the widest sense) conferences/journals with short descriptions?

    PS: Four out of five researchers I met published quite a lot of fluffy stuff for my taste. I am no expert, but those people sometimes told me themselves that the implementation does not matter, just the idea and the presentation. I always wondered about that. I could probably write about ideas all day long (not instantly, but with a bit of preparation), but the implementation and the practical part are the really hard part, which academia just does not like to concern itself with. Also, many papers practically scream "I was written so the publication list of my author gets longer" - which is a waste of time for everyone, and often a waste of tax money, too. When I think of CS-ish papers, I think of running implementations or actual data, like e.g. Google's MapReduce, Serving Large-scale Batch Computed Data with Project Voldemort, or the like.

    Read the article

  • Implementing MVC pattern in SWT application

    - by Pradeep Simha
    I am developing an SWT application (it's basically an Eclipse plugin, so I need to use SWT). Currently my design is as follows:

    Model: the model contains POJOs which represent the actual fields in the views.

    View: it is a dumb layer; it contains just the UI and no logic (not even event handlers).

    Controller: it acts as a mediator between those two layers. It is also responsible for creating the view layer, handling events, etc.

    Basically I have created all of the controls in the view as static fields, like this: public static Button btnLogin. In the controller I have code like this:

        public void createLoginView(Composite comp) {
            LoginFormView.createView(comp); // This createView method is in the view layer, i.e. LoginFormView
            LoginFormView.btnLogin.addSelectionListener(new SelectionListener() {
                // Code goes here
            });
        }

    I have done the same for the other views and controls, so that in the main class and other classes I am only calling the controller's createLoginView. So my question is: is what I am doing correct? Is this design good, or should I have followed some other approach? I am new to SWT and Eclipse plugin development (basically I am a Java EE developer with 4+ years of experience). Any tips/pointers would be appreciated.

    Read the article

  • Acer aspire one d270 can not set brightness

    - by Marko
    I hope you can help me figure out how to set the brightness on my netbook. The following problem has appeared since I installed Ubuntu 11.10 on my Acer: I am not able to adjust the brightness with the Fn keys, nor manually under System Settings - Display.

    After searching with Google for a while, I found a way to adjust it from the terminal with the following command: sudo setpci -s 00:02.0 f4.b=7f (values from 00 to 9f). That was a major breakthrough for me, as I am still new to Linux. But still seeking a way to get the Fn keys for brightness to work, I kept searching until I found askubuntu.com. I read through various questions by other Acer users and tried their solutions, but unfortunately none worked out for me.

    From the thread "fn + arrow keys don't adjust actual brightness on an Acer Aspire 5740" I tried sudo gedit /etc/X11/xorg.conf. This command did not work because the file was not found. I also used nano instead of gedit, but the file was empty (I think it just created the file, since it did not exist).

    These commands, which I found elsewhere, gave me a boot loop and I had to repair Ubuntu:

        sudo gedit /etc/default/grub
        (change the line GRUB_CMDLINE_LINUX="" into GRUB_CMDLINE_LINUX="acpi_osi=Linux")
        sudo update-grub

    From the post "Screen Brightness not adjustable for Acer Aspire S3" I tried the solution from the last post, but it did not work either. Does anyone know what I could try? I would appreciate it if someone could help me out with this. Thanks in advance.

    Netbook specs: CPU: Intel Atom N2600; Memory: 2 GB DDR3; Storage: 320 GB HD; GPU: Intel GMA 3600.

    Read the article

  • Thumbs Up or Thumbs Down – Intel Debuts Prototype Palm-Reading Tech to Replace Passwords [Poll]

    - by Asian Angel
    This week Intel debuted prototype palm-reading tech that could serve as a replacement for our current password system. Our question for you today: do you think this is the right direction to go for better security, or do you feel this is a mistake? Photo courtesy of Jane Rahman.

    Needless to say, password security breaches have been a hot topic of late, so perhaps a whole new security model is in order. It would definitely eliminate the need to remember a large volume of passwords, along with circumventing the problem of poor password creation/selection. At the same time, the new technology would still be in the early stages of development and may not work as well as people would like. Long-term refinement would definitely improve its performance, but would it really be worth pursuing given the actual benefits?

    From the blog post: Intel researcher Sridhar Iyendar demonstrated the technology at Intel's Developer Forum this week. Waving a hand in front of a "palm vein" detector on a computer, one of Iyendar's assistants was logged into Windows 7, was able to view his bank account, and then once he moved away the computer locked Windows and went into sleep mode.

    Read the article

  • How best to handle ID3D11InputLayout in rendering code?

    - by JohnB
    I'm looking for an elegant way to handle input layouts in my DirectX 11 code. The problem is that I have an Effect class and an Element class. The Effect class encapsulates shaders and similar settings, and the Element class contains something that can be drawn (a 3D model, landscape, etc.). My drawing code sets the device shaders etc. using the specified Effect and then calls the draw function of the Element to draw the actual geometry contained in it.

    The problem is this: I need to create an ID3D11InputLayout somewhere. This really belongs in the Element class, as it's no business of the rest of the system how that element chooses to represent its vertex layout. But in order to create the object, the API requires the vertex shader bytecode for the vertex shader that will be used to draw the object. In DirectX 9 it was easy; there was no such dependency, so my Element could contain its own input layout structures and set them without the Effect being involved. But the Element shouldn't really have to know anything about the Effect it's being drawn with - that's just render settings, and the Element is there to provide geometry.

    So I don't really know where to store, and how to select, the InputLayout for each draw call. I mean, I've made something work, but it seems very ugly. This makes me think I've either missed something obvious, or else my design of having all the render settings in an Effect, the geometry in an Element, and a third party that draws it all is just flawed. I'm just wondering how anyone else handles their input layouts in DirectX 11 in an elegant way?
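
    One pattern that several engines seem to use (offered here as a sketch, not as the D3D11-sanctioned answer) is to let neither class own the ID3D11InputLayout: the renderer caches layouts keyed by the pair (element vertex format, effect vertex-shader signature) and creates one lazily the first time that combination is drawn. The snippet below only illustrates the bookkeeping, in Python for brevity; in real code the create_layout callback would wrap ID3D11Device::CreateInputLayout with the effect's vertex shader bytecode, and all of the names here are made up.

        from types import SimpleNamespace

        _layout_cache = {}

        def get_input_layout(element, effect, create_layout):
            # Neither Element nor Effect owns the layout; the renderer caches it
            # per (vertex format, vertex shader) pair and creates it lazily.
            key = (element.vertex_format_id, effect.vertex_shader_id)
            if key not in _layout_cache:
                _layout_cache[key] = create_layout(element.vertex_format, effect.vs_bytecode)
            return _layout_cache[key]

        # Tiny stand-ins for the real classes, just to show the flow:
        element = SimpleNamespace(vertex_format_id="PosNormUV", vertex_format="...")
        effect = SimpleNamespace(vertex_shader_id="BasicVS", vs_bytecode=b"...")
        create_layout = lambda fmt, bytecode: ("fake layout for", fmt)  # would wrap CreateInputLayout

        layout = get_input_layout(element, effect, create_layout)
        assert get_input_layout(element, effect, create_layout) is layout  # second draw reuses the cache

    This keeps the Element describing only its vertex format and the Effect describing only its shaders, while the dependency the API forces on you lives in one small cache owned by the code that draws.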

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferable to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone.

    My question: until I use a database-specific feature, why should I be tied to one database? For instance: we have a query with multiple joins and we decide to write a raw SQL query. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some "fake" SQL language which can translate to any database's SQL query. Even Django's ORM could be built on top of it. So if you step outside the ORM without using database-specific features, you can still remain database independent.

    I asked the same question of Jacob Kaplan-Moss (in person). He advised me to stay with the database that I like and use its full power, which I agree with. But my point was not that we should never use database-specific features. My point is that we should be database independent until we use a database-specific feature. Please explain: should there be a fake SQL layer over the actual SQL?
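
    As far as I know there is no official "fake SQL" dialect layer in Django beyond the ORM itself, but the idea can be prototyped in a few lines: describe the vendor-sensitive fragments neutrally and render them per backend only at the last moment, so the code stays portable until you explicitly opt in to a vendor feature. Everything below (the template tables, render_condition) is invented for illustration:

        # Portable templates by default, vendor-specific ones only where they pay off.
        GENERIC = {
            "ilike": "UPPER({col}) LIKE UPPER(%s)",   # works on any backend
        }
        POSTGRES = {
            "ilike": "{col} ILIKE %s",                # Postgres-only convenience
        }

        def render_condition(op, col, vendor_overrides=None):
            templates = dict(GENERIC, **(vendor_overrides or {}))
            return templates[op].format(col=col)

        print(render_condition("ilike", "author.name"))            # UPPER(author.name) LIKE UPPER(%s)
        print(render_condition("ilike", "author.name", POSTGRES))  # author.name ILIKE %s

    The raw query with multiple joins would then be assembled from neutral fragments like these, and it only becomes Postgres-specific on the day you deliberately swap in a Postgres template.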

    Read the article

  • Practical considerations for HTML / CSS naming conventions (syntax)

    - by Jeroen
    Question: what are the practical considerations for the syntax of class and id values? Note that I'm not asking about the semantics, i.e. the actual words that are being used, as described for example in this blog post. There are a lot of resources on that side of naming conventions already, which in fact obscured my search for practical information on the various syntactical bits: casing, use of punctuation (specifically the - dash), specific characters to use or avoid, etc.

    To sum up the reasons I'm asking this question:

    - The naming restrictions on id and class don't naturally lead to any conventions.
    - The abundance of resources on the semantic side of naming conventions obscures searches on the syntactic considerations.
    - I couldn't find any authoritative source on this.
    - There wasn't any question on Programmers SE yet on this topic :)

    Some of the conventions I've considered using:

    1. UpperCamelCase, mainly as a cross-over habit from server-side coding
    2. lowerCamelCase, for consistency with JavaScript naming conventions
    3. css-style-classes, which is consistent with the naming of CSS properties (but can be annoying for Ctrl+Shift+ArrowKey text selection)
    4. with_under_scores, which I personally haven't seen used much
    5. alllowercase, simple to remember but can be hard to read for longer names
    6. UPPERCASEFTW, as a great way to annoy your fellow programmers (perhaps combined with option 4 for readability)

    And probably I've left out some important options or combinations as well. So: what considerations are there for naming conventions, and to which convention do they lead?

    Read the article

  • How should a developer reject impossible requirements?

    - by sugar
    Here's the problem I'm facing. Quote from the project manager:

    "Hey Sugar, I'm assigning you the task of developing a framework that could be used for many different iOS applications. Here are the requirements: It should be able to detect the thickness of the thumb or fingers being used to manipulate the UI. With this information, all elements of the UI should be arranged and sized automatically. For a larger thumb, elements should be arranged nearer the center of the screen. For a smaller thumb, elements should be arranged nearer the corners of the screen. For a larger thumb, all fonts should be smaller (we're assuming an adult in this case). For a smaller thumb, all fonts should be larger (we're assuming a younger person in this case). Summary: this framework is required for creating user-friendly user interfaces programmatically. The framework should be developed in such a way that we can use it for as many projects as needed, so it must also be very developer-friendly."

    I am the developer given this task, so my questions are as follows:

    - How can I explain that these requirements are a little ridiculous?
    - How can I explain that it would be better to concentrate on developing actual projects?
    - How can I explain that even if this were possible, I wouldn't recommend developing such a thing?
    - How do I say NO to this project politely, gently, and respectfully?
    - How can I explain that even for a developer with 3 years of experience, this might not be possible?

    Read the article

  • Rotating a view of a chunked 2d tilemap

    - by Danie Clawson
    I'm working on a top-down (oblique) tile-based engine. I would like the tiles to have a definable height in the world, with characters being occluded by them, etc. This has led to a desire to be able to "rotate" the view of the world, even though I'm using all hand-drawn graphics and blitting. Therefore, I need to rotate the actual world itself, or change how the camera traverses these arrays.

    How can, or should, I create individual rotations of 90 degrees when I have multi-dimensional arrays? Is it faster to actually rotate the array, to access it differently, or to create pre-computed accessor(?) arrays, something like how my chunks work? How can I rotate an individual chunk, or set of chunks?

    Currently I establish my tile grid like this (tile height not included):

        function Surface(WIDTH, HEIGHT) {
            WIDTH = Math.max(WIDTH - (WIDTH % TPC), TPC);
            HEIGHT = Math.max(HEIGHT - (HEIGHT % TPC), TPC);
            this.tiles = [];
            this.chunks = [];
            // Establish tiles
            for (var x = 0; x < WIDTH; x++) {
                var col = [],
                    ch_x = Math.floor(x / TPC);
                if (!this.chunks[ch_x]) this.chunks.push([]);
                for (var y = 0; y < HEIGHT; y++) {
                    var tile = new Tile(x, y),
                        ch_y = Math.floor(y / TPC);
                    if (!this.chunks[ch_x][ch_y]) this.chunks[ch_x].push([]);
                    this.chunks[ch_x][ch_y].push(tile);
                    col.push(tile);
                }
                this.tiles.push(col);
            }
        };

    Even some basic advice on my data structure would be much appreciated.
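
    For the 90-degree views specifically, it is usually cheaper to leave the arrays alone and remap the (x, y) lookup per rotation than to physically rotate the data. The function below shows the index math in Python only because it is compact; porting it onto the tiles/chunks arrays above is mechanical, and the name rotated_get is made up. Note that for the 90 and 270 cases the view's x runs over the original height and its y over the original width.

        def rotated_get(tiles, width, height, view_x, view_y, rotation):
            # Look up the tile that appears at (view_x, view_y) when the map is
            # viewed rotated clockwise by 0, 90, 180 or 270 degrees.
            # tiles is indexed tiles[x][y], exactly like this.tiles in Surface;
            # the data itself is never moved, only the lookup is remapped.
            if rotation == 0:
                return tiles[view_x][view_y]
            if rotation == 90:
                return tiles[view_y][height - 1 - view_x]
            if rotation == 180:
                return tiles[width - 1 - view_x][height - 1 - view_y]
            if rotation == 270:
                return tiles[width - 1 - view_y][view_x]
            raise ValueError("rotation must be 0, 90, 180 or 270")

    The same remapping works one level up for chunks: apply the identical formula with chunk coordinates and chunk-grid dimensions to decide which chunk to fetch, then again inside the chunk.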

    Read the article

  • Distinction between API and frontend-backend

    - by Jason
    I'm trying to write a "standard" business web site. By "standard", I mean this site runs the usual HTML5, CSS and Javascript for the front-end, a back-end (to process stuff), and MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the back-end writes to the database whatever the user enters and does some processing. Just like most sites out there.

    In creating my GitHub repositories to begin coding, I've realized I don't understand the distinction between the front-end, the back-end, and the API. Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then the questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask.

    Some more details:

    - I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer.
    - The API will be RESTful.
    - I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries. I think this style is more consistent.

    My questions:

    - Does the front-end call the back-end, which calls the API? Or does the front-end just call the API instead of calling the back-end?
    - Does the back-end just execute an API call, and the API returns control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)?

    Long and detailed answers explaining the role of the API alongside the front-end and back-end are encouraged. If the answer depends on the model of programming (models other than the Model-View-Controller pattern), please describe these other ways of thinking about the API. Thanks. I'm very confused.
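
    One common way to draw the lines - offered as one possible arrangement, not the only one - is that the "API" is simply the HTTP-facing surface of the back-end: the front-end (HTML/JS in the browser) calls the REST API, and the back-end is the controller and model code sitting behind those routes. "The back-end using its own API" then just means internal code goes through the same route handlers instead of issuing ad-hoc queries. The sketch below is framework-free Python, with all names invented, purely to show the layering:

        # Model layer: the only code that talks to MySQL (stubbed out here).
        def find_customer(customer_id):
            return {"id": customer_id, "name": "example"}

        # Controller / back-end logic behind one REST route.
        def get_customer(customer_id):
            return {"status": 200, "body": find_customer(customer_id)}

        # The API is the route table the front-end reaches over HTTP.
        API_ROUTES = {
            ("GET", "/customers/<id>"): get_customer,
        }

        # The browser front-end issues e.g. GET /customers/42; the server
        # resolves the route and calls the controller:
        handler = API_ROUTES[("GET", "/customers/<id>")]
        print(handler(42))   # {'status': 200, 'body': {'id': 42, 'name': 'example'}}

    Under MVC, find_customer is the model, get_customer the controller, and the browser-side rendering the view; the API is not a third process, just the contract between the front-end and that controller layer.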

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make my weapons easier to create and to have less code in the actual project file. So I started out making a basic XML document, something just to assign variables with. But no matter what I changed, it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error; when I added the element it asked for, it started to say the variables weren't found.

    What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values; then in the Texture tag of my XML document I would set it to that instance. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me to.

    My XML document:

        <XnaContent>
          <Asset Type="ConversationEngine.Weapon">
            <weaponStrength>0</weaponStrength>
            <damageModifiers>0</damageModifiers>
            <speed>0</speed>
            <magicDefense>0</magicDefense>
            <description>0</description>
            <identifier>0</identifier>
            <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture>
          </Asset>
        </XnaContent>

    My class to load the weapon XML:

        public static class LoadWeaponXML
        {
            static Weapon Weapons;

            public static Weapon WeaponLoad(ContentManager content, int id)
            {
                Weapons = content.Load<Weapon>(@"Weapons/" + id);
                return Weapons;
            }
        }

        public static class LoadWeaponTextures
        {
            public static Texture2D ironSword;

            public static void TextureLoad(ContentManager content)
            {
                ironSword = content.Load<Texture2D>("Sword");
            }
        }

    I'm not entirely sure you can load textures through XML, but any help would be greatly appreciated.

    Read the article

  • Does a team of developers need a manager?

    - by Amadiere
    Background: I'm currently part of a team of four: 1 manager, 1 senior developer and 2 developers. We do a range of bespoke in-house systems/projects (e.g. 6-8 weeks each) for an organisation of around 3,500 staff, as well as all the maintenance and support required for the systems that have been created before. There are not enough of us to do all the work that is potentially coming our way - we're understaffed. Management acknowledge this, but budget constraints limit our ability to recruit additional members to the team (even if we would make the salary back in savings).

    The change: this leaves us where we are now. Our manager is due to leave his role for pastures new, leaving a vacancy in the team. Management are using this opportunity to restructure our team, which would see the team manager role replaced by another developer and another senior developer. Their logic is that we need more developers, so here's a way of funding it (one of the roles is partially funded from another vacant post). The team would have no direct line manager, and the roles and responsibilities would be divided up between the seniors and the (relatively new to post) service manager (a non-technical role with little-to-no development knowledge or experience, whose focus is shared among a number of other teams and individuals), who would be our next actual manager up the food chain.

    I guess the final question is: is it possible to run a development team without a manager? Have you had experience of this? And what things could go wrong, or be of benefit to us? I'd ideally like to "see the light" and the benefits of doing things this way, or come up with some points to argue against it.

    Read the article

  • Cloning (mirroring) laptop display to area of external monitor display

    - by intuited
    I'm using Maverick Meerkat (10.10) on an HP Pavilion tx2110. This machine has an NVidia Go6150 graphics card and sports a 1280x800 display. I have an external monitor which can do 1280x1024. FWIW I'm using Openbox as my window manager; as I understand it, this shouldn't be a factor.

    I'd like to clone the display to the monitor so that the size of the desktop remains at 1280x800 and there is a horizontal blank area on the external monitor. I.e., I want to avoid having to pan the desktop on either monitor. So the actual resolution of the monitor would be 1280x1024, but the resolution of the section of the monitor where stuff was actually being displayed would be 1280x800.

    Using the nvidia-settings applet, I'm able to set up the cloned display so that the desktop size is 1280x1024 (the resolution of the external monitor), but I can't find a way to instead have the desktop size stay at the resolution of the laptop's built-in display. Is this achievable? Ideally I'd like the external monitor's blank area to be at the top of the screen, i.e. for it to align the display with the bottom of the screen.

    Read the article

  • Windows Phone 7 Prototype 002: Animated Page Transitions + Writeable Bitmaps

    Motion is a key part of WP7 application development. Without motion, the WP7 UI is just a bunch of text - not nearly as exciting. To delight users, you can add some transitions between pages. The sample app includes some storyboards to animate between two pages. Other people have noted that you can just use the TransitioningContentControl from the Silverlight Toolkit. Peter Torr also had a nice animating frame control in his MIX demo code (his blog has some other great code samples for WP7 app dev).

    I took some of those concepts and the code from the TransitioningContentControl to make a new animating frame control. In this prototype, the frame takes a snapshot of the old content and the new content using writeable bitmaps, animates the snapshots, and then replaces them with the actual page. The benefit is smoother animation on pages with lots of controls; otherwise, if you have a large panorama, it might not animate that cleanly. Like the other solutions based on the TransitioningContentControl, you can centralize all the animations in one place and not have to handle them on each individual page. Peter's code also had a nice snippet for choosing the animation based on the navigation direction, so you could just have a forward/backward animation and not have to do anything on each page. You could also probably add some more advanced transitions using pixel shaders, or make a default no-transition state if you wanted some specific animation on a page where individual controls transitioned out differently, like some of the WP7 shell apps.

    Sample code: 100% guaranteed to work on my emulator.

    Read the article

  • What's so difficult about SVN merges? [closed]

    - by Mason Wheeler
    Possible Duplicate: I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    Every once in a while, you hear someone saying that distributed version control (Git, Hg) is inherently better than centralized version control (like SVN) because merging is difficult and painful in SVN. The thing is, I've never had any trouble with merging in SVN, and since you only ever hear that claim being made by DVCS advocates, and not by actual SVN users, it tends to remind me of those obnoxious commercials on TV that try to sell you something you don't need by having bumbling actors pretend that the thing you already have, which works just fine, is incredibly difficult to use.

    And the use case that's invariably brought up is re-merging a branch, which again reminds me of those strawman product advertisements: if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!)

    So, discounting the ridiculous strawman use case, what is there in SVN merging that is inherently more difficult than merging in a DVCS system?

    Read the article

  • How to protect Google Ads from yontoo layers runtime?

    - by Dharmavir
    For some time now I have observed that Google ads on any website, including my blog (http://blogs.digitss.com), get replaced with something similar to the image uploaded below. I am sure it's happening to many people, and it could reduce Google AdSense income.

    After some research I found that it is because of the "Yontoo Layers runtime" from http://www.yontoo.com/ (tagline: "Platform that allows you to control the websites you visit everyday."), but actually they are taking over. I am not sure which software they use to make their way onto users' computers, but it seems very bad for the freedom of the Internet and for the advertising/marketing industry. I don't remember ever saying "yes" to installing Yontoo on my computer. This piece of software managed to install itself on my laptop, desktop and workstation at the office.

    I am going to disable it now, but the question is: how do I make my websites aware of the Yontoo runtime and stop it from replacing Google ads? So far they are not able to replace all AdSense ads - they have successfully replaced only the first AdSense instance - but I am sure in the future they will hit more. There could be two approaches: 1) fool the Yontoo runtime by putting some misleading divs in the HTML document to save the actual ads, or 2) completely disable Yontoo with some client-side script (JavaScript) that can fail/crash the Yontoo runtime and so defeat its purpose of replacing ads.

    You can visit my blog (http://blogs.digitss.com) and look at the top-right corner: if you find the Google ad replaced with something similar to the image attached to the question, it means your computer/browser is infected too. Looking forward to replies from webmasters, in case someone has already written some code/plugin to make websites (and Google ads) safe from Yontoo or similar runtimes. FYI: it was able to push this runtime into all browsers installed on the machine, so it is a dangerous threat. And yes, I am just using Google ads - not sure whether the Yontoo runtime plays the same trick on other ad networks or not.

    Read the article

  • Developing a feature which sole purpose to be taken out?

    - by adib
    What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature from the final product?

    This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company, it is well known that middle management is pressured to "give input" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs".

    In one project at the above company, the artists and developers created a supernumerary animated character that appeared in every cutscene and stuck out like a sore thumb. They designed it in such a way that it could be easily removed before the game shipped (this was when games were still sold on physical media rather than as downloadable products). Obviously management then voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they had shown that they provided constructive input to the product.

    This process pattern has a name among game programmers who work in large companies, but I forget the actual name. I believe it's duck-something. Can anybody help point out the name, and perhaps some reasonably credible reference to how the pattern develops?

    Read the article

  • Burned CD-Rs are not identical to the input iso image, why?

    - by Grumbel
    I have the issue that sometimes when I burn an iso image to a CD-R with:

        sudo wodim -v driveropts=burnfree -data dev=/dev/scd0 input.iso

    and then read it back out again with:

        sudo dd if=/dev/cdrom of=output.iso
        dd: reading `/dev/cdrom': Input/output error
        ...

    I end up with two iso images that are not identical; namely, output.iso is missing 2048 bytes at the end. However, when I mount the iso image or the CD-R and compare the actual files at the mountpoint, both are identical. Is that expected behavior, or is it an actually incorrect burn of the data? And if it's expected, how can I verify that the burn process was successful?

    The reason I ask in the first place is that it seems to be reproducible behavior: certain iso images come out 2048 bytes short, even on repeated burns, but the burned CD-Rs are all identical to one another. Also, what is the reason behind the

        dd: reading `/dev/cdrom': Input/output error

    message? As it always happens, I assume it is normal, but what is the technical reason behind it? I assume CDs don't allow the device to detect the size directly, so dd reads until it hits the end the hard way.

    Edit: User karol on superuser.com mentioned that both the size issue and the read error are the result of using -tao (the wodim default) instead of -dao mode. I haven't been able to test it yet, but it sounds like the most plausible explanation so far.
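
    One way to verify a burn without tripping over the run-out behaviour is to compare only the first N bytes of the disc against the image, where N is the image size, so the trailing read error and any padding no longer matter. A small sketch (assuming the disc is readable at /dev/cdrom and the script is run with sufficient permissions):

        import hashlib
        import os

        def sha256_prefix(path, limit, block=1024 * 1024):
            # Hash only the first `limit` bytes of a file or block device.
            digest = hashlib.sha256()
            remaining = limit
            with open(path, "rb") as f:
                while remaining > 0:
                    chunk = f.read(min(block, remaining))
                    if not chunk:
                        break
                    digest.update(chunk)
                    remaining -= len(chunk)
            return digest.hexdigest()

        size = os.path.getsize("input.iso")
        same = sha256_prefix("input.iso", size) == sha256_prefix("/dev/cdrom", size)
        print("burn verified" if same else "burn differs from image")

    This sidesteps the question of whether the extra 2048 bytes matter, since it checks exactly the bytes the image contains.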

    Read the article
