Search Results

Search found 12397 results on 496 pages for 'maybe'.

  • Your personal backlog

    - by johndoucette
    Whenever I start a new project or come in during a hectic time to help salvage a deliverable – there is always a backlog. Generating the backlog can be a daunting exercise, but worth the effort. Once I have a backlog, I feel in control and the chaos begins to quell. In your everyday life, you too should keep a backlog. Here is how I do it:
    1. Always carry a notebook
    2. Start each day marking a new page with today’s date
    3. Flip to yesterday’s notes and copy every task with an empty checkbox next to it, to the new empty page (today)
    4. As the day progresses and you go to meetings, do your work, or get interrupted to do something… jot it down in today’s page and put an empty checkbox next to it
    If you get it done during the day, awesome. Mark it complete. Keep carrying and writing every task to each new day until it is complete. Maybe one day, you will have an empty backlog and your sprint will be complete!

  • Windows Defender Update KB915597 (Definition 1.135.415.0) Killed My Live Discs

    - by user88311
    Here's my problem. For those willing to read for about 2 minutes, here's the entire story: http://answers.microsoft.com/en-us/windows/forum/windows_vista-windows_update/bsod-after-windows-defender-update-kb915597/a4b5fca3-0274-47b4-97c4-61b34c4c4599. For those who want the short version, here's what happened. After Windows Update automatically updated Windows Defender to KB915597, my computer started getting BSODs on shutdown and startup and started experiencing problems with the USB ports. So I decided to go to the Microsoft Answers site for help (I know that was probably my first mistake), and by following their advice I turned my computer into a large paperweight.
    Luckily I make physical backups of my C drive every few months and I have one from back in July, so I figured I'd boot an Ubuntu live disc, copy all my files from the past 2 months to an external drive, and just copy the backup back to the C drive. That's where I ran into this problem. When I put in either an Ubuntu or Kubuntu disc, everything goes well until the loading bar finishes; then, when the OS would presumably start up, the computer resets. I've tried Ubuntu, Kubuntu and GParted, and only GParted is able to get to the point where it starts up, but even then the computer resets when I try to access the internet from it, and when I tried to copy the entire C drive partition to a blank external drive I wasn't able to.
    So I figured the C drive might have something to do with it and unplugged it, leaving my computer just a 2.8 GHz processor and 2 GB of RAM, which should have had no problem starting a live disc, but the problem continues. After doing some Googling around, I've found that whenever Windows gets an update titled KB915597 it's pretty much the kill switch for Windows. I've tried contacting Microsoft tech support and even managed to directly contact a software engineer, but as soon as I mentioned KB915597 they all just blew me off. I hope anybody who reads this has an idea how to fix it. I'm going to attempt to install Ubuntu or Kubuntu to an external drive using the same computer and see what happens.

  • BizTalk: Dynamic SMTP Port: Unknown Error Description

    - by Leonid Ganeline
    Today I investigated a strange error while working with a dynamic SMTP port.
        Event Type: Error
        Event Source: BizTalk Server 2006
        Event Category: BizTalk Server 2006
        Event ID: 5754
        Date: ********
        Time: ******** AM
        User: N/A
        Computer: ********
        Description: A message sent to adapter "SMTP" on send port "*********" with URI "mailto:********.com" is suspended. Error details: Unknown Error Description
        MessageId: {********}
        InstanceID: {********}
    My code was pretty simple, and the source of the error was hidden somewhere inside it:
        msg_MyMessage(SMTP.CC) = var_CC;
        msg_MyMessage(SMTP.From) = var_From;
        msg_MyMessage(SMTP.Subject) = var_Subject;
        msg_MyMessage(SMTP.EmailBodyText) = var_Message;    // #1
        msg_MyMessage(SMTP.SMTPHost) = "localhost";
        msg_MyMessage(SMTP.SMTPAuthenticate) = 0;
    When I added line #2, this frustrating error disappeared:
        msg_MyMessage(SMTP.EmailBodyTextCharset) = "UTF-8";    // #2
    Conclusion: if we use the SMTP.EmailBodyText property, we must also set the SMTP.EmailBodyTextCharset property. To me it looks like a bug in BizTalk. [Maybe it is "by design", but in that case give us a useful error text!!!] And don't ask me how much time I've spent on this investigation.

  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that represents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find that hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and it then returned a specific object, you wouldn't have to re-find it later on, since the form already returned a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate UI from logic?
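
    One common answer is that the "re-find" is cheap when the controller keys its objects by ID. Below is a minimal sketch of that idea (class and field names are hypothetical, not from the question): the view only ever sees (id, label) pairs, and resolving the submitted ID is a single dictionary lookup rather than a second search.

        # Hypothetical domain object and controller; the view never touches Hamburger directly.
        class Hamburger:
            def __init__(self, id, name, has_tomato):
                self.id, self.name, self.has_tomato = id, name, has_tomato

        class Controller:
            def __init__(self, burgers):
                self.by_id = {b.id: b for b in burgers}  # index once, look up in O(1)

            def choices_for_view(self):
                # Strings and IDs only: the UI never digs into domain objects.
                return [(b.id, b.name) for b in self.by_id.values() if b.has_tomato]

            def on_submit(self, chosen_id):
                return self.by_id[chosen_id]  # the "re-find" is one dict lookup

        ctrl = Controller([Hamburger(1, "Classic", True), Hamburger(2, "BBQ", False)])
        print(ctrl.choices_for_view())   # what the combobox displays
        print(ctrl.on_submit(1).name)    # what comes back after the user picks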

  • ArchBeat Link-o-Rama for 2012-03-21

    - by Bob Rhubart
    Webcast: Simplify Oracle RAC Deployment with Oracle VM (event.on24.com)
    Tuesday, March 20, 2012 - 9am PT / Noon ET. Learn how you can: deploy an Oracle RAC Database environment in minutes with Oracle VM templates; create, deploy or convert existing systems into highly available cluster environments; and instantly respond to changing demand by relocating resources between servers. Speakers: Ronen Kofman – Product Management Director, Oracle; Markus Michalewicz – Senior Principal Product Manager, Oracle.

    Webcast: Oracle Business Intelligence Mobile (event.on24.com)
    Event Date: Wednesday, March 28, 2012. Time: 10 a.m. PT / 1 p.m. ET. Speakers: Pete Manhardt – Director, Enterprise Information at Smiths Group, plc; Shailesh Shedge – Director, BI & Analytics Practice at Ascentt; Manan Goel – Director, BI Product Marketing at Oracle.

    Seth's Blog: The extraordinary software development manager (sethgodin.typepad.com)
    "Being good at programming is insufficient qualification for becoming a world class software project manager/leader," says marketing guru Seth Godin.

    Mismatch: Developer skills and customer demands | Floyd Teter (orclville.blogspot.com)
    "Those of us in the developer community may need to reconsider the law of supply and demand," says Oracle ACE Director Floyd Teter, "and get on with the process of matching our skills to the demands of our customers."

    SOA gets mobilized; mobile gets SOA-ized: survey | Joe McKendrick (www.zdnet.com)
    "Maybe mobile is the killer app for SOA that actually will convince people to adopt the architectural style."

    Integrating with Oracle Fusion Applications: Discovering Integration Artifacts | Rajesh Raheja (rraheja.wordpress.com)
    Rajesh Raheja briefly discusses "the ease with which integrations are now possible using standards-based technologies with enterprise applications."

    Chargeback and showback... both a 'throw back' | Tom Laszewski (blogs.oracle.com)
    Tom Laszewski discusses strategies for tracking and applying the costs of "IT services, hardware or software to the business unit in which they are used."

    GlassFish 4.0 Virtualization Progress - VirtualBox | The Aquarium (blogs.oracle.com)
    Want to spawn GlassFish instances as VirtualBox virtual machines? The Aquarium shares resources that will help you get it done.

    Thought for the Day: "Spring is the time of plans and projects." — Leo Tolstoy

  • How do search engines handle hyphenated words?

    - by NinjaKC
    I am not sure my title fully explains what I mean, but I thought this might be an interesting question. If I have a set of keywords broken up with a dash or two, will search engines consider the dash-split keyword as the full keyword? Say I have a site that breaks words down the way the dictionary sites do, so a keyword for that page might end up in the page, and/or the URL, broken by dashes:
        Key-word = keyword
        Co-op-er-at-ive = cooperative
        Pho-to-gra-phy = Photography
        www.example.com/key-word/
        www.example.com/co-op-er-at-ive/
        www.example.com/pho-to-gra-phy/
    I know search engines (at least Google) will treat a dash as a space and understand the result as multiple words. But in the English language a dash can also break a word into syllables (at least I think it can, can't it?), so will search engines take this into consideration as well? I did a little research: I Googled some words with random dashes placed in them, and it returned the words I searched for. But that could just be Google correcting what it considers a typo on the user's end, so really I am wondering whether I can purposely put a dash in a keyword and still have the search engine spiders index that keyword as the real word without dashes. I've done a little Googling and looking here on Stack Overflow, but everything comes down to dashes separating multiple words, not the specific thing I'm trying to figure out. Hopefully that makes sense. I am not an expert in SEO yet, but I get the basics and have been experimenting, and this is really just a random question to satisfy my curiosity. :P

  • Need help deciding if Joomla! experience is a good metric for hiring a particular prospective employee

    - by Stephen
    My company has been looking to hire a PHP developer. Some of the requirements for the job include:
        - an understanding of design patterns, particularly MVC
        - some knowledge of PHP 5.3's new features
        - experience working with a PHP framework (it doesn't matter which one)
    I interviewed a man today whose primary work experience involved working with Joomla!. As an employee, he will be required to work on existing and new web applications that use Zend Framework, CakePHP and/or CodeIgniter. It is my opinion that we shouldn't dismiss hiring a developer just because he has not used the same technologies that he'll be using on the job. So, I'd like to know about the kind of coding experience working with Joomla! can provide. I've never bothered to take more than a brief look (if that) at the Joomla! package, so I'm hoping to lean on the knowledge of my peers. Would you consider Joomla! to contain a professional code base? Is the package well organized and/or OO in general, or is it more like WordPress, where logic and presentation are commingled? When working with Joomla!, is the developer encouraged to use best practices? In your opinion, would experience working with Joomla! build the skills needed to get up to speed with Zend or CakePHP quickly, or is there a steep learning curve ahead of the developer? I'm not saying that Joomla! is a bad technology, or even that it is lower on the totem pole than the frameworks I've mentioned. Maybe it's awesome, I dunno. I simply have no idea!

  • How to translate formulas into natural language?

    - by Ricky
    I am currently working on a project aimed at evaluating whether an Android app crashes or not. The evaluation process is:
        1. Collect the logs (which record the execution process of an app).
        2. Generate formulas to predict the result (the formulas are generated by GP).
        3. Evaluate the logs with the formulas.
    Now I can produce formulas, but for the users' convenience I want to translate the formulas into natural language and tell users why a crash happened. (I think of it as "inverse natural language processing".) To explain the idea more clearly, imagine you got a formula like this:
        155 - count(onKeyDown) >= 148
    If count(onKeyDown) > 7, the result of "155 - count(onKeyDown) >= 148" is false, so a log containing more than 7 onKeyDown events would be predicted "Failed". I want to show users that if the onKeyDown event appears more than 7 times (155 - 148 = 7), this app will crash. However, the real formulas are much more complicated, such as:
        (< !( ( SUM( {Att[17]}, Event[5]) <= MAX( {Att[7]}, Att[0] >= Att[11]) OR SUM( {Att[17]}, Event[5]) > MIN( {Att[12]}, 734 > Att[19]) ) OR count(Event[5]) != 1 ) > (< count(Att[4] = Att[3]) >= count(702 != Att[8]) + 348 / SUM( {Att[13]}, 641 < Att[12]) mod 587 - SUM( {Att[13]}, Att[10] < Att[15]) mod MAX( {Att[13]}, Event[2]) + 384 > count(Event[10]) != 1))
    I tried to implement this in C++, but it's quite difficult. Does anyone know how to implement this function quickly? (Maybe with some tools or research findings?) Any idea is welcome. :) Thanks in advance.
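
    Since the GP engine already builds these formulas, one practical route is to keep (or re-parse) them as expression trees and attach an English template to each operator; the translation is then a plain recursive walk. A minimal sketch in Python (the tree shape and the templates are my own assumption, not from the project):

        TEMPLATES = {
            ">=": "{left} is at least {right}",
            ">": "{left} is greater than {right}",
            "-": "{left} minus {right}",
        }

        def render(node):
            # Leaves are numbers or event names; inner nodes are (op, *children).
            if isinstance(node, (int, float, str)):
                return str(node)
            op, *args = node
            if op == "count":
                return "the number of {} events".format(args[0])
            left, right = (render(a) for a in args)
            return TEMPLATES[op].format(left=left, right=right)

        # 155 - count(onKeyDown) >= 148
        formula = (">=", ("-", 155, ("count", "onKeyDown")), 148)
        print("Predicted to pass when", render(formula))
        # -> Predicted to pass when 155 minus the number of onKeyDown events is at least 148

    Negating the rendered condition then gives the crash explanation ("the app is predicted to fail when more than 7 onKeyDown events occur"), and the same table-driven approach extends operator by operator to the larger formulas.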

  • What's the relationship between meta-circular interpreters, virtual machines and increased performance?

    - by Gomi
    I've read about meta-circular interpreters on the web (including SICP) and I've looked into the code of some implementations (such as PyPy and Narcissus). I've read quite a bit about two languages which made great use of metacircular evaluation, Lisp and Smalltalk. As far as I understand, Lisp had the first self-hosting compiler and Smalltalk had the first "true" JIT implementation. One thing I've not fully understood is how those interpreters/compilers can achieve such good performance or, in other words: why is PyPy faster than CPython? Is it because of reflection? My Smalltalk research also led me to believe that there's a relationship between JIT, virtual machines and reflection. Virtual machines such as the JVM and CLR allow a great deal of type introspection, and I believe they make great use of it in just-in-time (and AOT, I suppose?) compilation. But as far as I know, virtual machines are kind of like CPUs, in that they have a basic instruction set. Are virtual machines efficient because they include type and reference information, which would allow language-agnostic reflection? I ask this because many languages, both interpreted and compiled, now use bytecode as a target (LLVM, Parrot, YARV, CPython), and traditional VMs like the JVM and CLR have gained incredible boosts in performance. I've been told that it's about JIT, but as far as I know JIT is nothing new, since Smalltalk and Sun's own Self were doing it before Java. I don't remember VMs performing particularly well in the past; there weren't many non-academic ones outside of the JVM and .NET, and their performance was definitely not as good as it is now (I wish I could source this claim, but I speak from personal experience). Then all of a sudden, in the late 2000s, something changed and a lot of VMs started to pop up even for established languages, with very good performance. Was something discovered about JIT implementation that allowed pretty much every modern VM to skyrocket in performance? A paper or a book, maybe?

  • Allow JMX connection on JVM 1.6.x

    - by Martin Müller
    While trying to monitor a JVM on a remote system using visualvm, the activation of JMX gave me some challenges. Dr. Google and my employer's documentation quickly revealed some -D opts needed for JMX, but strangely they only worked on a Solaris 10 system (my setup: MacOS laptop monitoring SPARC Solaris based JVMs). On S11 with the same opts I saw "my" JVM listening on port 3000 (which I chose for JMX), but visualvm was not able to get a connection. Finally I found out that at least my S11 installation needed an explicit setting of the RMI host name. This is what finally worked:
        -Dcom.sun.management.jmxremote=true \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.port=3000 \
        -Djava.rmi.server.hostname=s11name.us.oracle.com \
    Maybe this post saves someone else the time I spent on research.

  • Save .mov file with AppleScript

    - by Frost Shadow
    I've installed the Perian add-on for QuickTime so it can open .flv files, which I can then save as .m4v or .mov. I'm trying to make an AppleScript to convert from .flv to .m4v automatically by using this tutorial and butchering their example AppleScript file, which normally converts ChemDraw files (.cdx, .cml, .mol) to .tiff, so that it instead uses QuickTime to save the .flv files as .m4v. When I try to use it, though, I get the error "QuickTime Player got an error: document 1 doesn't understand the save message". My save message is currently:
        save first document in target_path as ".m4v"
    which looks like it matches the QuickTime dictionary's instructions:
        save specifier : The document(s) or window(s) to save.
        [as saveable file format] : The file format to use.
    I've also tried "m4v", without the period, and still get the error. Is my save command wrong, or is the error probably from trying to use QuickTime instead of the original ChemDraw? I tried to change references to .cdx, .cml, .mol, .tiff, and ChemDraw to .flv, .m4v, and QuickTime respectively, but maybe it's more complicated than that? I would, in fact, appreciate any example showing how to save an application file (e.g. a TextEdit .rtf or .txt), as I can't seem to get any kind of file to save using AppleScript.

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place, and information will reside there. Now, the point of this post is to raise awareness of the options available for companies that have implemented SharePoint and maybe are a bit “iffy” on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states:
        “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.”
    So even in the event of a single-server deployment you have options to safeguard your data. After you have executed the stsadm command above, use Windows Server Backup to take a full server backup; when a restore is needed, you can then select specifically the section that holds the SharePoint technologies backup. Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm. Credits: Sean McDonough for passing along the information available on TechNet.
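
    For reference, a minimal sketch of the two commands involved, run from an elevated prompt on the SharePoint server (stsadm.exe lives in the SharePoint root BIN folder, and the target drive letter here is hypothetical):

        REM Register the SharePoint VSS writer once, then take a full server
        REM backup with the Windows Server Backup command-line tool.
        stsadm.exe -o registerwsswriter
        wbadmin start backup -backupTarget:E: -include:C: -quiet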

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the same software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC), without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:
        1. I work on my Laptop PC and make some changes to software configuration, for example:
            - configure vim to have a new plugin
            - update the Search Tracker / Recoll file search index
            - configure Thunderbird to have an additional IMAP account ('remember password')
            - add some new bookmarks in Firefox/Chrome
            - change the desktop background image
            - install new software with apt-get install
            - build and install new software with checkinstall
            - etc.
        2. I do some 'sync' operation.
        3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
        4. I work on my Desktop PC and make some changes to software configuration, for example:
            - add a new directory to the list of directories to be backed up by DejaDup
            - add a new spell-checking dictionary to LibreOffice Writer
            - configure the Terminator software to have colored fonts
            - install a new font into the Ubuntu system
            - configure Ekiga to make phone calls
            - etc.
        5. I do some 'sync' operation.
        6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.
    Question: what free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services? Complementary question: it is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations? Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once. Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB stick with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.
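
    For the package half of this, one cloud-free approach is to capture the dpkg selections on one machine and replay them on the other, with rsync over SSH carrying the configuration files. A minimal sketch (the hostname and dotfile selection are hypothetical, and hardware-specific packages such as video/wlan drivers should be pruned from the list before replaying it):

        # On the laptop: capture the installed-package list.
        dpkg --get-selections > ~/packages.list
        # Copy the list and selected configuration files to the other machine.
        rsync -a ~/packages.list ~/.vimrc ~/.thunderbird user@desktop-pc:~/
        # On the desktop: replay the selections and install whatever is missing.
        sudo dpkg --set-selections < ~/packages.list
        sudo apt-get dselect-upgrade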

  • Doing powerups in a component-based system

    - by deft_code
    I'm just starting to really get my head around component-based design, and I don't know what the "right" way to do this is. Here's the scenario: the player can equip a shield. The shield is drawn as a bubble around the player, it has a separate collision shape, and it reduces the damage the player receives from area effects. How is such a shield architected in a component-based game? Where I get confused is that the shield obviously has three components associated with it:
        - damage reduction / filtering
        - a sprite
        - a collider
    To make it worse, different shield variations could have even more behaviors, all of which could be components:
        - boost player maximum health
        - health regen
        - projectile deflection
        - etc.
    Am I overthinking this? Should the shield just be a super component? I really think this is the wrong answer, so if you think this is the way to go, please explain. Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering, and it also kinda blurs the lines between attached components and entities. Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet. Should the shield just be a set of components that get added to the player? Possibly with an extra component to manage the others, e.g. so they can all be removed as a group (accidentally leave behind the damage reduction component; now that would be fun). Or is there something else that's obvious to someone with more component experience?
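
    For what it's worth, the last option sketches out quite naturally. Below is a minimal illustration in Python, with an entity/component API invented for the example (not from any particular engine): the shield is a set of ordinary components plus one grouping component whose only job is to remove its siblings together.

        class Entity:
            def __init__(self):
                self.components = []

            def add(self, component):
                component.entity = self
                self.components.append(component)

            def remove(self, component):
                self.components.remove(component)
                component.entity = None

        class Component:
            entity = None  # set by Entity.add

        class DamageFilter(Component):
            def filter(self, amount):
                return amount * 0.5  # halve incoming area damage

        class ShieldSprite(Component):
            def draw(self):
                pass  # draw the bubble at self.entity's position

        class ComponentGroup(Component):
            # Tracks sibling components so the power-up can be removed as one unit.
            def __init__(self, members):
                self.members = members

            def remove_from(self, entity):
                for member in self.members:
                    entity.remove(member)
                entity.remove(self)

        def equip_shield(player):
            parts = [DamageFilter(), ShieldSprite()]
            for part in parts:
                player.add(part)
            group = ComponentGroup(parts)
            player.add(group)
            return group

        player = Entity()
        shield = equip_shield(player)
        print(len(player.components))  # 3: filter, sprite, and the group itself
        shield.remove_from(player)
        print(len(player.components))  # 0: the whole power-up left together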

  • The best way to structure/design game code

    - by Edward
    My question is quite broad and relates to 2D game code design/architecture/structure. Usually the main game consists of the main loop, where you update and render your world state. However, it's recommended for many reasons to separate rendering from the game logic. I am kind of confused about the whole situation, because many game engines/libs/SDKs don't follow this separation scheme. They promote an approach where you define some scenes/stages, those scenes contain some objects, and the scene/stage controls the user input and so on. For example, in cocos2d(-x) and libgdx (scene2d), games are usually written so that the update logic happens at the same time and place as the rendering. The propagated style is also a structure where an object knows how to draw itself, which is not a separation of updating and rendering. The same goes for Flash-based games: they are usually built so that an object (class) contains a swf or a texture plus some data, holds some update logic itself or is updated from the main scene, and already knows how to draw itself via "addChild". Also, some people recommend the MVC pattern, which would require completely bending the structure of those engines/libs/SDKs. Maybe I am overthinking everything, but I am totally confused. I would be grateful if somebody could point me in the right direction on game code structure. What is your way of doing things in libgdx/cocos2d/Flash?
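
    As a concrete picture of the separation in question, here is a minimal sketch with hypothetical World and Renderer classes (no particular engine): the simulation owns the state and an update(dt), the renderer only reads that state, and the loop is the single place that calls both.

        class World:
            # Pure simulation: holds state and logic, knows nothing about drawing.
            def __init__(self):
                self.entities = []

            def update(self, dt):
                for entity in self.entities:
                    entity.update(dt)

        class Renderer:
            # Reads world state and draws it; holds no game logic.
            def render(self, world):
                for entity in world.entities:
                    self.draw(entity)

            def draw(self, entity):
                pass  # sprite lookup and blitting live here, not on the entity

        def run(world, renderer, clock):
            while True:
                dt = clock.tick()         # seconds since the last frame
                world.update(dt)          # logic only
                renderer.render(world)    # drawing only

    Whether this buys you anything over the scene-graph style is a trade-off: the draw-yourself approach is quicker to write, while the split above makes it easier to run the simulation headless, test it, or swap renderers.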

  • What functionality of an iPad can I use with Ubuntu?

    - by Andrew Ferrier
    What functionality of an iPad can I use with Ubuntu? I'm thinking about buying an iPad, but I only have Ubuntu PCs these days (no Windows, no Mac), and I'm nervous that it may be too reliant on the existence of iTunes. I'm less concerned about getting media onto and off it (and I have read that I can do this with libimobiledevice), but will I be able to activate it without iTunes? Does it need to be USB-synced on a regular basis, or can I do most anything I want through the cloud (i.e. over wifi)? Edit: in answer to jrgifford's question, in theory I'd like to do everything I would want to do with it were I using Windows/Mac. But more specifically, I think I'd want to:
        - "Activate" it, if that's necessary
        - Copy music / video onto it
        - Install apps
    How much of this can I do without a Windows/Mac PC? Is it necessary to activate it? I don't think I'm that interested in backing it up (what would I back up?), but then again, maybe I'm confused as to how easy it is to get data on and off. Can I get a regular SFTP app for the iPad to copy data to my Ubuntu machine over wi-fi, for example?

  • I cannot solve the "Install these packages without verification" problem

    - by Yonatan Orlev
    I Googled and Googled and I just cannot find a solution to this problem:
        sudo apt-get install <whatever>
    gives me:
        WARNING: The following packages cannot be authenticated!
    and
        Install these packages without verification [y/N]?
    I cannot find a decent solution. The closest I got was to run:
        sudo apt-get install debian-keyring debian-archive-keyring
    But then, even though, against my better judgment, I agreed to install the packages without verification, I get (I replaced http with XXXX because of forum limitations):
        Install these packages without verification [y/N]? y
        Err XXXX://il.archive.ubuntu.com gutsy/universe debian-archive-keyring 2007.02.19-0.1
          404 Not Found
        Err XXXX://il.archive.ubuntu.com gutsy/universe debian-keyring 2005.05.28
          404 Not Found
        Failed to fetch XXXX://il.archive.ubuntu.com/ubuntu/pool/universe/d/debian-archive-keyring/debian-archive-keyring_2007.02.19-0.1_all.deb 404 Not Found
        Failed to fetch XXXX://il.archive.ubuntu.com/ubuntu/pool/universe/d/debian-keyring/debian-keyring_2005.05.28_all.deb 404 Not Found
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
    Running apt-get update does not help either: I get tons of "404 Not Found" errors. Can someone please direct me to a good solution to this problem? I cannot understand why this issue is not better documented. There must be a simple solution that allows me to update my list of sources or whatever.

  • How to blend multiple normal maps?

    - by János Turánszki
    I want to achieve a distortion effect which distorts the full screen. For that I spawn a couple of images with normal maps. I render their normal-map part on some camera-facing quads onto a render target which is cleared to the color (127,127,255,255). This color means there is no distortion whatsoever. Then I want to render some images like this one onto it. If I draw one somewhere on the screen, it looks correct, because it blends in seamlessly with the background (which is the same color that appears on the edges of the image). If I draw another one on top of it, the transition is no longer seamless. For this I created a blend state in DirectX 11 that keeps the maximum of the two colors, so it is now a seamless transition, but this way the channels lower than 127 (0.5f normalized) do not contribute. I am not making a simulation, and the effect looks quite convincing and nice for a game, but in my spare time I am wondering how I could achieve a nicer or more correct effect with a blend state, maybe by averaging the colors somehow? If I did it with a shader, I would add the colors and then normalize them, but I need to combine an arbitrary number of images onto a render target. This is my blend state now, which blends them seamlessly but not correctly:
        D3D11_BLEND_DESC bd;
        bd.RenderTarget[0].BlendEnable = true;
        bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
        bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].RenderTargetWriteMask = 0x0f;
    Is there any way of improving upon this? (P.S. I considered rendering each one incrementally with a separate shader on top of the others, but that would consume a lot of render targets, which is unacceptable.)
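
    To make the add-then-normalize idea concrete, here is a minimal sketch of the math in Python (my own illustration; a real implementation would live in a shader, and more accurate schemes such as Reoriented Normal Mapping exist): unpack each layer's channels from [0, 255] to [-1, 1], sum the vectors, renormalize, and repack. Two flat layers blend back to flat, and channels below 127 now pull the result instead of being discarded.

        import math

        def unpack(c):
            # Map a stored channel [0..255] to a normal component [-1..1].
            return c / 127.5 - 1.0

        def pack(v):
            return int(round((v + 1.0) * 127.5))

        def blend_normals(layers):
            # Sum the unpacked normals of arbitrarily many layers, then renormalize.
            x = sum(unpack(c[0]) for c in layers)
            y = sum(unpack(c[1]) for c in layers)
            z = sum(unpack(c[2]) for c in layers)
            length = math.sqrt(x * x + y * y + z * z) or 1.0
            return (pack(x / length), pack(y / length), pack(z / length))

        # The flat color stays flat, and a distorted layer keeps its direction.
        print(blend_normals([(127, 127, 255), (127, 127, 255)]))  # ~(127, 127, 255)
        print(blend_normals([(127, 127, 255), (200, 60, 255)]))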

  • Turn-based tile game dynamic item/skill/command script files

    - by user1542
    I want to create a mechanism that can read text scripts, for example some kind of custom script with an extension such as ".skill" or ".item", which might contain something simple like:
        .item
        Item.Name = "Strength Gauntlet";
        Character.STR += 20;

        .skill
        Skill.Name = "Poison Attack";
        Skill.Description = "Steal HP and inflict poison";
        Player.HP += 200;
        Enemy.HP -= 200;
        Enemy.Status += Status.POISON;
    It may end up different from this; I just want to give some idea of what I am after. However, I do not know how to dynamically parse these things and translate them into working script. For example, in a battle scenario, my game should read one of these ".skill" files and apply it to the current enemy or the current player. How would I do this? Should I go for string parsing? It is like a script engine, but I prefer C# over creating a new language, so how would I parse custom files into the appropriate status commands? There is another problem: I have also created a command engine which reads string input and parses it into an action, so that "MOVE (1,2)" moves the character to tile (1,2). Each command belongs to a separate class and provides a generic parsing method that must be implemented by hand, because each command takes a different number and type of arguments. However, I think this is not well designed, because I want the engine to parse the parameters automatically according to a specified list of types. For example, the MOVE command in "MOVE 1 2" would automatically parse the parameters into int, int and put them into X and Y. This format can change, and I should be able to specify all the formats manually. Any suggestion for this problem? Should I switch from string parsing to some hardcoded methods/classes?
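
    One way to get that "typed parameter list" behavior without hand-written parsing in every command class is a registry that stores each command's expected argument types and converts the tokens automatically. A minimal sketch in Python for brevity (command names hypothetical; in C# the same idea maps to a Dictionary of delegates plus Convert.ChangeType):

        COMMANDS = {}

        def command(name, *types):
            # Register a handler together with the argument types it expects.
            def wrap(fn):
                COMMANDS[name] = (types, fn)
                return fn
            return wrap

        @command("MOVE", int, int)
        def move(x, y):
            print(f"moving character to tile ({x}, {y})")

        @command("SAY", str)
        def say(text):
            print(f"character says: {text}")

        def run(line):
            name, *raw = line.split()
            types, fn = COMMANDS[name]
            args = [t(token) for t, token in zip(types, raw)]  # convert per declared type
            fn(*args)

        run("MOVE 1 2")
        run("SAY hello")

    The same registry can back the ".item"/".skill" files: each "X.Y op value" line is one command, so the script loader reduces to reading lines and feeding them through run().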

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me, this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related stuff and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of those things. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: How broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

  • Does the use of mongodb enhance extending/changing database driven applications?

    - by developer10214
    When an application that needs to store data is created, an SQL database is very often used; so have I done in a lot of ASP.NET applications. The resulting applications often have an ORM, like Entity Framework, and maybe a business layer. So when such an application needs to be extended (let's say you have to add a comment property to an object), you have to change/extend the database, then the ORM, then the business layer, and so on. To deploy the changes you have to update the target database and the application. I know that things like code-first and fluent APIs can make this approach easier. I tried MongoDB, using only the standard driver, and when I had to extend some objects, all I had to do was change the code. So it feels like such extensions are much easier to realize with MongoDB. I don't have much experience with larger applications and MongoDB. I know that neither an SQL database nor MongoDB fits all needs, and both have their pros and cons. I want to know if my feeling is right; if so, I would rather choose MongoDB than an SQL database.
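
    To illustrate that feeling concretely, here is a minimal sketch with PyMongo, assuming a local mongod and hypothetical collection/field names: adding a comment property involves no schema migration, no ORM mapping change, and no ALTER TABLE at deploy time; new documents simply carry the extra field and old ones are read without it.

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        orders = client.shop.orders

        # Documents written before the feature existed have no comment field.
        orders.insert_one({"item": "toothpaste", "qty": 1})
        # Newer code just starts writing the new field; nothing else changes.
        orders.insert_one({"item": "soap", "qty": 2, "comment": "gift wrap, please"})

        for doc in orders.find():
            print(doc.get("comment", "<no comment>"))  # tolerate both shapes

    The flip side of the flexibility is that the "schema" now lives implicitly in the reading code, which has to tolerate every historical document shape.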

  • Should a programmer "think" for the client?

    - by P.Brian.Mackey
    I have gotten to the point where I hate requirements gathering. Customers are too vague for their own good. In an agile environment, where we can show the client a piece of work as it nears completion, it's not too bad, as we can make small, regular corrections/updates to functionality. In a "waterfall" type of environment (requirements first, nearly complete product next), things can get ugly. This kind of environment has led me to constantly question requirements. E.g., the customer wants to "automatically convert input to the number 1" (referring to a Qty in an order). But what they don't think about is that the "input" could be a simple typo. An "x" in a textbox could be a "whoops", not "I want 1 of those toothpaste products". But there's so much in the air with requirements that I could stand there for hours on end smashing out what they want. This just isn't healthy. Working for a corporation, I could try to adjust the culture to fit the agile model that would help us (no small job, and above my pay grade). Or I could sweep the ugly details under the rug and hope for the best. Maybe my customer is trying to get too close to the code? How does one handle the problem of "thinking for the client" without pissing them off with too many questions?

  • What are the legal considerations when forking a BSD-licensed project?

    - by Thomas Owens
    I'm interested in forking a project released under a two-clause BSD license:
        Copyright (c) 2010 {copyright holder}
        All rights reserved.

        Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

        (1) Redistributions of source code must retain the above copyright notice, this list of conditions and the disclaimer at the end. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

        (2) Neither the name of {copyright holder} nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

        DISCLAIMER
        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
    I've never forked a project before, but this project is very similar to something that I need/want. However, I'm not sure how far I'll get, so my plan is to pull the latest from their repository and start working. Maybe, eventually, I'll get it to where I want it, and be able to release it. Is this the right approach? How, exactly, does this impact forking of the project? How do I track who owns what components or sections (what's copyright me, what's copyright the original creators, once I start stomping over their code base)? Can I fork this project? What must I do prior to releasing, and when/if I decide to release the software derived from this BSD-licensed work?

  • "io: postinst-must-call-ldconfig" when creating a package

    - by egarcia
    I'm trying to create an Ubuntu .deb package for the (pretty awesome) Io language. I am not the developer of that language, so I'm not familiar with its source code yet, and this is my first attempt at creating a .deb file. To create the .deb, I'm following these instructions: http://www.webupd8.org/2010/01/how-to-create-deb-package-ubuntu-debian.html
    So far I've been able to create a .deb file (io_2010.06.01-1_amd64.deb) and a changes file (io_2010.06.01-1_amd64.changes). I'm using lintian to check the changes file, and it reports an issue I don't know how to resolve:
        $ lintian -Ivi io_2010.06.01-1_amd64.changes
        ... (lots of messages)
        I: io: no-symbols-control-file usr/lib/libiovmall.so
        I: io: no-symbols-control-file usr/lib/libgarbagecollector.so
        I: io: no-symbols-control-file usr/lib/libbasekit.so
        E: io: postinst-must-call-ldconfig usr/lib/libiovmall.so
        N:
        N: The package installs shared libraries in a directory controlled by the
        N: dynamic library loader. Therefore, the package must call "ldconfig" in
        N: its postinst script.
        N:
        N: Refer to Debian Policy Manual section 8.1.1 (ldconfig) for details.
        N:
        N: Severity: serious, Certainty: certain
        N:
        N: Removing /tmp/OYuNShEHYz ...
    I've read section 8.1.1 of the Debian Policy Manual. I think I understand what the problem is (I need to make sure that ldconfig is invoked "somewhere", probably in a place called "postinst"), but I don't know how to resolve it, i.e. where this "postinst" file is and how I should change it. The current way of installing Io on Ubuntu is basically running sudo make install and then sudo ldconfig. Maybe the makefile should be modified so ldconfig is called from it? I don't know. Thanks a lot.
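
    For the record, the file lintian is complaining about is the postinst maintainer script that ships inside the .deb (debian/postinst in a debhelper source tree, or DEBIAN/postinst when building from a staged directory with dpkg-deb). A minimal sketch that satisfies the check, assuming no other configuration steps are needed (debhelper-based packages normally get this call injected automatically by dh_makeshlibs):

        #!/bin/sh
        # postinst -- runs after the package's files are unpacked.
        set -e

        if [ "$1" = "configure" ]; then
            # Refresh the dynamic linker cache so the new .so files are found.
            ldconfig
        fi

        exit 0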

  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014’s In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine’s standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, ‘latch-free’ data structures. So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren’t tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don’t currently support FOREIGN KEY constraints or CHECK constraints, and you can’t put the checks in triggers because there aren’t any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, ‘bad data’ can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper. One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn’t this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be ‘Hekatonized’ with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn’t compromise the integrity of the data in the base tables.
