Search Results

Search found 10352 results on 415 pages for 'context switching'.


  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it’s been an… ahem… eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career-minded activities. Now that the personal drama of the past three years is more or less resolved, it’s time to put a few things back on the front burner.

    What I consider to be proper exception handling practices is relatively well known these days. There are plenty of blog posts out there already on this topic which more or less echo my opinions on it. I’ll try to include a few links at the bottom of the post. Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this:

        Error: System.SomeException
        Followed by small stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.

    These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on it, becomes skewed depending on how far up the stack trace your exception was thrown. To mitigate this problem, we came up with the concept of the “PreLoggedException”. Basically, we would log the exception at the very top level and subsequently throw the exception back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this:

        Error: System.SomeException
        Followed by small stack trace.

    Much cleaner, right? Well, there’s still a problem. When your exception happens in production and you go about trying to figure out what happened, you’ve lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we’re losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you’re actually getting is the stack for how you got to the logging code. You’re still losing context about the actual error. Not to mention you’re executing a whole slew of catch blocks, which are sloooooooowwwww……… In other words, we started with a bad idea and kept band-aiding it until it didn’t suck quite so bad.

    When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me, “do yourself a favor, never express that view in any future interviews.” I suppose this is my ultimate dismissal of that advice, but I’m not too worried. My approach for exception handling follows three basic rules. Only catch an exception if: 1. You can do something about it. 2. You can add useful information to it. 3. You’re at an application boundary. Here’s what that means:

    1. Only catch an exception if you can do something about it. We’ll start with a trivial example of a login system that uses a file. Please, never actually do this in production code; it’s just a concocted example. So if our code goes to open a file and the file isn’t there, we get a FileNotFound exception. If the calling code doesn’t know what to do with this, it should bubble up. However, if we know how to create the file from scratch, we can create the file and continue on our merry way. When you run into situations like this, though, what should really run through your head is “How can I avoid handling an exception at all?” In this case, it’s a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn’t there, we can accomplish the same thing without having to handle it in a catch block.

    2. Only catch an exception if you can add useful information to it. Continuing with the poorly thought out file-based login system we contrived in part 1, if the code calls a Login(…) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn’t implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument’s sake we’ll say LoginSystemFailureException. (Sorry, couldn’t think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and is also much more informative to the calling code (a code sketch of rules 1 and 2 follows at the end of this post).

    3. Only catch an exception if you’re at an application boundary. At some point we have to catch all the exceptions, even the ones we don’t know what to do with. WinForms, ASP.Net, and most other UI technologies have some kind of built-in mechanism for catching unhandled exceptions without fatally terminating the application. It’s still a good idea to somehow gracefully exit the application in this case if possible, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a “Fatal” status (e.g. Log4Net) and make sure these bugs get handled in your next release.

    That’s it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity 1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic: http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET I know there’s more but I can’t find them at the moment.
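    To make rules 1 and 2 concrete, here is a minimal C# sketch of the contrived file-based login example. LoginSystemFailureException is the wrapper type named above; UserFile and the line format are invented for illustration only:

        using System;
        using System.IO;

        // Rule 2's wrapper type: callers see one meaningful exception,
        // and the original failure survives as InnerException.
        public class LoginSystemFailureException : Exception
        {
            public LoginSystemFailureException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class LoginSystem
        {
            private const string UserFile = "users.dat"; // invented name

            public bool Login(string user, string password)
            {
                // Rule 1: avoid the exception entirely when a simple check will do.
                if (!File.Exists(UserFile))
                {
                    File.WriteAllText(UserFile, string.Empty);
                }

                try
                {
                    string[] lines = File.ReadAllLines(UserFile);
                    return Array.Exists(lines, l => l == user + ":" + password);
                }
                catch (IOException ex)
                {
                    // Rule 2: wrap the low-level failure in something meaningful
                    // to the caller; the data file is an implementation detail.
                    throw new LoginSystemFailureException(
                        "The login data store could not be read.", ex);
                }
            }
        }

    The unhandled-exception hook from rule 3 (e.g. Application.ThreadException in WinForms) would then be the single place where a LoginSystemFailureException that nobody could act on finally gets logged.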

    Read the article

  • TGIF: Engagement Wrap-up

    - by Michael Snow
    We've had a very busy week here at Oracle, and as we build up to Oracle OpenWorld starting in less than 10 days, it doesn't look like things will be slowing down. Engagement is definitely in the air this week. Our friend John Mancini published a great article entitled "The World of Engagement" on his Digital Landfill blog yesterday, and we hosted a great webcast with R "Ray" Wang from Constellation Research yesterday on the "9 C's of Engagement". I wanted to wrap up the week with some key takeaways from our webcast yesterday with Ray Wang. If you missed the webcast yesterday, fear not - it is now available On-Demand. We'll leave you this week with lots of questions about how to navigate these churning waters of engagement. Stay tuned to the Oracle WebCenter Social Business Thought Leaders Webcast Series as we fuel this dialogue.

    Company Culture: Does company support a culture of putting customer satisfaction ahead of profits? Does culture promote creativity and cross functional employee collaboration? Does culture accept different views of a multi-generational workforce? Does culture promote employee training and skills development? Does culture support upward mobility and long term retention? Does culture support work-life balance? Does the culture provide rewards for employees for outstanding customer support?

    Channels: What are the current primary channels for customer communications? What do you think will be the primary channels in two years? Is company developing a support model for emerging channels? Do all channels consistently deliver the same level of customer support? Do you know the cost per transaction across all channels? Do you engage customers proactively across multiple channels? Do all channels have access to the same customer information?

    Community: Does company extend customer support into virtual communities of interest? Does company facilitate educating users through its virtual communities? Does company mine its customers' experience into useful data? Does company increase the value for customers through using data to deliver new products and services? Does company support two-way interactions with its customers through communities of interest? Does company actively support social CRM, online communities and social media markets?

    Credibility: Does company market its trustworthiness through external certificates such as business licenses, BBB certificates or other validations? Does company promote trust through customer testimonials and case studies on ethical business practices? Does company promote truthful market campaigns? Does company make it easy for customers to complain? Does company build its reputation for standing behind its products with guarantees for satisfaction? Does company protect its customer data with high security measures?

    Content: What sources do you use to create customer content? Does company mine social media and blogs for customer content? How does your company sort, store and retain its customer content? How frequently does content get updated? What external sources do you use for customer content? How many responses are typically received from a knowledge management system inquiry? Does your company use customer content to design and develop new products and services?

    Context: Does your company market to customers in clusters or individually? Does your company customize its messages and personalize them to the specific needs of each individual customer? Does your company store customer data based on their past behaviors, purchases, sentiment analysis and current activities? Does your company manage customer context according to channels used - for example, identifying personal-use channels versus business channels? What is your frequency of collecting customer activities across various touch points? How is your customer data stored and analyzed? Is contextual data used for future customer outreach?

    Cadence: Which channels does your company measure - web site visits, phone calls, IVR, store visits, face to face, social media? Does company make effective use of cross-channel marketing to promote more frequent customer engagement? Does your company rate the patterns relevant for your product or service and monitor usage against this pattern? Does your company measure the frequency of both online and offline channels? Does your company apply metrics to the frequency of customer engagements with product or services revenues? Does your company consolidate data for customer engagement across various channels for a complete view of its customer?

    Catalyst: Does company offer coupon discounts? Does company have a customer loyalty program or a VIP membership program? Does company mine customer data to target specific groups of buyers? Do internal employees serve as ambassadors for customer programs? Does company drive loyalty through social media loyalty programs? Does company build rewards based on using loyalty data? Does company offer an employee incentive program to drive customer loyalty?

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
    This post is in two parts – technical and personal. And I should point out that it’s prompted in part by this month’s T-SQL Tuesday, hosted by Allen Kinsel.

    First, the technical: I’ve had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server being available and the new one coming online. The way that most people seem to think of migration is this:

    1. Build a new server.
    2. Stop people from using the old server.
    3. Take a backup of the old server.
    4. Restore it on the new server.
    5. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
    6. Bring the new server online.

    There are other things involved, such as testing, of course. But this is essentially the process that people tell me they’re planning to follow. The bit that I want to look at today (as you’ve probably guessed from my title) is the “backup and restore” section.

    If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model. Instead, it truncates regularly to stay small. One that’s using the Full Recovery Model (or Bulk-Logged) won’t truncate its log – the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It’s a requirement for most systems of High Availability, because if you’re making sure that a spare box is up-and-running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time.

    A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync. The main aspect of any High Availability system is to have a redundant system that is ready to take over. So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it’s an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all. There’s a fine line between a disaster and a migration.

    Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I’ve just set up a High Availability system for even the simplest of databases, it’s not necessarily a bad thing. :)

    So now the personal: It’s been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation). I can’t go into details, but my lawyers tell me that I’m allowed to say how I feel about it. The answer is ‘lousy’. I don’t regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I’m going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I’d do the same again, but that doesn’t really stop me feeling frustrated about it.

    The following day I had to fly to country Victoria to see my grandmother, who wasn’t expected to last the weekend. She’s still around a week later as I write this, but her 92-year-old body has basically given up on her. She’s been a Christian all her life, and is looking forward to eternity. We’ll all miss her though, and it’s hard to see my family grieving.

    Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as “holidays by Michael Bay”). Somehow, and I really don’t know how, the part of it nearest us bounced high enough to clear the car, and there wasn’t even a scratch. We pulled over to check, and I was just thanking God that we’d changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could’ve happened if we’d had something that size land on the windscreen...

    All this has drilled home that while I feel that I haven’t provided as well for the family as I could’ve done (like by pursuing an expensive legal case), I shouldn’t even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I’m not going to get everything right, and there will be things that feel like disasters, some which could’ve been in my control and some which are very much beyond my control. The case feels like something I could’ve pursued differently, a disaster that could’ve been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can’t affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead. Life is never the same from one day to the next. Every event has a before and an after – sometimes it’s clearly positive, sometimes it’s not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I’m not suggesting that I know how to view everything from the “God works all things for good” perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God’s hands. Hopefully I’ve learned things, and will be able to live accordingly. I’ve come through this time now, and even though I’ll miss Gran, I’ll see her again one day, and the future is bright.

    Read the article

  • Ubuntu won't connect to wired network

    - by djeikyb
    I'm running 10.04, upgraded from 9.10, maybe, but probably not upgraded from 9.04. I have two wifi routers. Zeus is connected to the dsl modem. Hermes uses a wds bridge with Zeus to extend the network. My desktop (Daedalus) is ethernetted to Hermes. My laptop (Clyde) is wifi, switching to Hermes or Zeus as needed.

    Occasionally, as in whenever I transfer a large file from desktop to laptop, the wds bridge will die. Fixing it means restarting both routers, though it seems Hermes should boot first. This is ridiculous, and eventually I'll get around to asking you guys to help me stop it from happening. More importantly, my desktop requires a reboot to get back on the network. WTF.

    ifconfig shows my nic has no ip. /etc/init.d/networking restart doesn't do anything, not even give me a lousy ip. dhcpcd eth1 grants me an ip address, but doesn't help with internet access. route -n shows what looks like my normal routing table, but pinging google.com informs me it's an unknown host.

        jake@daedalus:~$ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        0.0.0.0         10.1.1.1        0.0.0.0         UG    0      0        0 eth1

    It may be worth noting that I can ping both Zeus (10.1.1.1) and Hermes (10.1.1.4) and my laptop (10.1.1.55). Much obliged for any help. Rebooting is, well, trivial in this instance. But it's stupid. I switched to linux because I like the idea that if one part breaks, you fix it instead of reboot reboot reboot. I've left my poor desktop in disarray, confining myself to my little netbook. My desktop is broken, awaiting magical commands from you brilliant folk.

    (and yes, i know clyde the netbook should be named icarus. it was its original name. ironically the ssd burned out, and i felt it wasn't right when it came to reinstalling)

    Read the article

  • Changing drive nodes & hdparm

    - by Kalamalka Kid
    I am currently attempting to create a command that works at startup to kill the power on two of my very noisy hard drives. I have edited the /etc/rc.local file to include this command:

        sudo hdparm -y /dev/sdc
        sudo hdparm -y /dev/sdd
        exit 0

    While I think this should work, it seems the allocated drives keep switching around every time I reboot. I have sda, sdb, sdc, sdd, and sde, but they keep getting jumbled around (making the drive I wish to shut down different from sdd, which makes the task of shutting down the right drive on start-up quite cumbersome). I had a perfectly functioning fstab file which disappeared, but I restored it from the backup into the /etc dir:

        # <file system> <mount point> <type> <options> <dump> <pass>
        # Entry for /dev/sda1 :
        UUID=43c09daf-08a5-44f2-89b0-fc7c6f0d1e67 / ext4 errors=remount-ro 0 1
        # Entry for /dev/sdd1 :
        UUID=443AFBAD7FE50945 /media/DX100 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        # Entry for /dev/sdb1 :
        UUID=FCE456F5E456B21E /media/GalaxyM83 ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        # Entry for /dev/sdf1 :
        UUID=1CA057FDA057DBB8 /media/Holideck ntfs-3g defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0
        # Entry for /dev/sdc1 :
        UUID=7ABB49654B799D40 /media/JX3P ntfs defaults,nosuid,nodev,locale=en_CA.UTF-8 0 0

    It seems every time I boot the order of the drives changes. I do not know how to resolve this. A quick workaround for the problem was to go with UUID instead of the dev letter by editing the /etc/rc.local file to include:

        hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945
        hdparm -y /dev/disk/by-uuid/7ABB49654B799D40

    So I thought I was in the clear, as I heard both hard drives die down during the boot sequence, BUT, as soon as I log in both drives start up again! So now I have to figure out what is making them start up again after log in, or perhaps another way to get them to turn off. Is there some kind of command I can get to execute after log in? I tried editing the startup applications to include an autossh with:

        autoshh - sudo hdparm -y /dev/disk/by-uuid/7ABB49654B799D40
        autoshh - sudo hdparm -y /dev/disk/by-uuid/443AFBAD7FE50945

    but this did not seem to work to turn off the disks after log in.

    Read the article

  • OpenGLES GLSL Shader attributes always bound to 0

    - by codemonkey
    So I have a very simple vertex shader as follows:

        #version 120

        attribute vec3 position;
        attribute vec3 inColor;
        uniform mat4 mvp;
        varying vec3 fragColor;

        void main(void) {
            fragColor = inColor;
            gl_Position = mvp * vec4(position, 1.0);
        }

    Which I load, as well as the fragment shader:

        #version 120

        varying vec3 fragColor;

        void main(void) {
            gl_FragColor = vec4(fragColor, 1.0);
        }

    Which I then load, compile, and link to my shader program. I check for link status using glGetProgramiv(shaderProgram, GL_LINK_STATUS, &shaderSuccess); which returns GL_TRUE so I think it's ok. However, when I query the active attributes and uniforms using

        #ifdef DEBUG
        int totalAttributes = -1;
        glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &totalAttributes);
        for (int i = 0; i < totalAttributes; ++i) {
            int name_len = -1, num = -1;
            GLenum type = GL_ZERO;
            char name[100];
            glGetActiveAttrib(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name);
            name[name_len] = 0;
            GLuint location = glGetAttribLocation(shaderProgram, name);
            fprintf(stderr, "Attribute %s is bound at %d\n", name, location);
        }

        int totalUniforms = -1;
        glGetProgramiv(shaderProgram, GL_ACTIVE_UNIFORMS, &totalUniforms);
        for (int i = 0; i < totalUniforms; ++i) {
            int name_len = -1, num = -1;
            GLenum type = GL_ZERO;
            char name[100];
            glGetActiveUniform(shaderProgram, GLuint(i), sizeof(name)-1, &name_len, &num, &type, name);
            name[name_len] = 0;
            GLuint location = glGetUniformLocation(shaderProgram, name);
            fprintf(stderr, "Uniform %s is bound at %d\n", name, location);
        }
        #endif

    I get:

        Attribute inColor is bound at 0
        Attribute position is bound at 1
        Uniform mvp is bound at 0

    Which leads to failure when trying to use the shader to render the objects. I have tried switching the order of declaration of position & inColor, but still, only position is bound at 1, with the other two giving 0. Can someone please explain why this is happening? Thanks

    Read the article

  • Creating smooth lighting transitions using tiles in HTML5/JavaScript game

    - by user12098
    I am trying to implement a lighting effect in an HTML5/JavaScript game using tile replacement. What I have now is kind of working, but the transitions do not look smooth/natural enough as the light source moves around. Here's where I am now: Right now I have a background map that has a light/shadow spectrum PNG tilesheet applied to it - going from darkest tile to completely transparent. By default the darkest tile is drawn across the entire level on launch, covering all other layers etc. I am using my predetermined tile sizes (40 x 40px) to calculate the position of each tile and store its x and y coordinates in an array. I am then spawning a transparent 40 x 40px "grid block" entity at each position in the array. The engine I'm using (ImpactJS) then allows me to calculate the distance from my light source entity to every instance of this grid block entity. I can then replace the tile underneath each of those grid block tiles with a tile of the appropriate transparency. Currently I'm doing the calculation like this in each instance of the grid block entity that is spawned on the map:

        var dist = this.distanceTo( ig.game.player );
        var percentage = 100 * dist / 960;
        if (percentage < 2) {
            // Spawns tile 64 of the shadow spectrum tilesheet at the specified position
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 64 );
        }
        else if (percentage < 4) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 63 );
        }
        else if (percentage < 6) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 62 );
        }
        // etc...

    The problem is that like I said, this type of calculation does not make the light source look very natural. Tile switching looks too sharp, whereas ideally the tiles would fade in and out smoothly using the spectrum tilesheet (I copied the tilesheet from another game that manages to do this, so I know it's not a problem with the tile shades. I'm just not sure how the other game is doing it). I'm thinking that perhaps my method of using percentages to switch out tiles could be replaced with a better/more dynamic proximity formula of some sort that would allow for smoother transitions? Might anyone have any ideas for what I can do to improve the visuals here, or a better way of calculating proximity with the information I'm collecting about each tile? (PS: I'm reposting this from Stack Overflow at someone's suggestion, sorry about the duplicate!)

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).

    Basic structure and parameters

    Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent), and a restrict(direct3d) lambda, whose signature is such that it returns void and accepts an index of the same rank as the grid. So it looks something like this (with generous returns for more palatable formatting), assuming we are dealing with a 2-dimensional space:

        // some_code_A
        parallel_for_each(
            g, // g is of type grid<2>
            [](index<2> idx) restrict(direct3d)
            {
                // kernel code
            }
        );
        // some_code_B

    The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (aka the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next.

    You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was set up by some_code_A as follows:

        extent<2> e(2,3);
        grid<2> g(e);

    ...then given that e.size()==6, e[0]==2, and e[1]==3, the six index<2> objects it generates (and hence the values that your lambda would receive) are:

        (0,0) (1,0) (0,1) (1,1) (0,2) (1,2)

    So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times, and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming, you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"

    Passing data in and out

    It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then typically return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [](index<2> idx) restrict(direct3d), where the empty square brackets mean that no variables were captured. If instead I write it like this: [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error. This has to do with one of the direct3d restrictions, where only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data). If I write the lambda line like this: [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, that I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn't be able to observe changes to it after the parallel_for_each call (e.g. in the some_code_B region, since it was passed by value) – the exception to this rule is the array_view since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this: [av, &ar](index<2> idx) restrict(direct3d), where av is a variable of type array_view and ar is a variable of type array – the point being you can be very specific about what variables you capture and how.

    So it looks like from a large data perspective you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array is ready to be used in the some_code_B region. We'll talk more about all this in future posts…

    (a)synchronous

    Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality, it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately by the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.

    That's all for now; we'll revisit the parallel_for_each description once we properly introduce array and array_view – coming next. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Ubuntu is booting on Acer E1-510 laptop, but screen displays nothing

    - by user287602
    Tried loading Ubuntu 12.04 (32-bit as well as 64-bit) through a bootable USB on a brand new 64-bit Acer E1-510 laptop. It shows the 'Try Ubuntu without installing' screen and I selected that option. But instead of showing Ubuntu's logo (which implies it is loading), I get a blank screen. The screen is on, but it shows nothing. I tried the same on an old model laptop, an Acer Aspire 5738, and it worked like a charm. However, I realized that Ubuntu is actually booting on the E1-510 and only the display is not working. How did I arrive at this conclusion? When I select 'Try Ubuntu without installing', after about 8-10 seconds I see that the WiFi indicator light on the laptop panel switches on (just like when we boot up a Windows OS or even Ubuntu). I got an idea that the system had booted Ubuntu. To confirm this, I tried to adjust the volume using keyboard shortcuts. Voila, I can hear the sound of adjusting volume! That means it has already booted Ubuntu. I confirmed this with another step. I pressed the power button once and after two seconds I pressed ENTER. It began the process of switching off and within 5 seconds the laptop was powered off. You may ask, why is this a confirmation that Ubuntu has booted? This is because in Ubuntu when you press the power button, a dialog box opens with shut down, restart, and suspend options - and the shut down option is already selected by default; so all I have to do is press ENTER to shut down. This again proved that Ubuntu was indeed up and running. Unlike previous AskUbuntu posts about the Acer E1-510, I must mention that my laptop came WITH the Legacy BIOS mode, so it's not really a problem to boot Ubuntu from a bootable pendrive. Only the screen is not working. In case you need to know, I am running Windows 7 Ultimate 64-bit on the Acer E1-510.

    Read the article

  • How to make a queue switch from FIFO mode to priority mode?

    - by enzom83
    I would like to implement a queue capable of operating both in FIFO mode and in priority mode. This is a message queue, and the priority is first of all based on the message type: for example, if messages of type A have higher priority than messages of type B, then as a consequence all messages of type A are dequeued first, and finally the messages of type B are dequeued.

    Priority mode: my idea consists of using multiple queues, one for each type of message; in this way, I can manage a priority based on the message type: just take messages first from the queue with higher priority and progressively from lower priority queues.

    FIFO mode: how to handle FIFO mode using multiple queues? In other words, the user does not see multiple queues, but uses the queue as if it were a single queue, so that the messages leave the queue in the order they arrive when the priority mode is disabled. In order to achieve this second goal I have thought to use a further queue to manage the order of arrival of the types of messages: let me explain better with the following code snippet.

        int NUMBER_OF_MESSAGE_TYPES = 4;
        int CAPACITY = 50;
        Queue[] internalQueues = new Queue[NUMBER_OF_MESSAGE_TYPES];
        Queue<int> queueIndexes = new Queue<int>(CAPACITY);

        void Enqueue(object message)
        {
            int index = ... // the destination queue (i.e. its index) is chosen according to the type of message.
            internalQueues[index].Enqueue(message);
            queueIndexes.Enqueue(index);
        }

        object Dequeue()
        {
            if (fifo_mode_enabled)
            {
                // What is the next type that has been enqueued?
                int index = queueIndexes.Dequeue();
                return internalQueues[index].Dequeue();
            }
            if (priority_mode_enabled)
            {
                for (int i = 0; i < NUMBER_OF_MESSAGE_TYPES; i++)
                {
                    int currentQueueIndex = i;
                    if (!internalQueues[currentQueueIndex].IsEmpty())
                    {
                        object result = internalQueues[currentQueueIndex].Dequeue();
                        // The following statement is fundamental to a subsequent switching
                        // from priority mode to FIFO mode: the messages that have not been
                        // dequeued (since they had lower priority) remain in the order in
                        // which they were queued.
                        queueIndexes.RemoveFirstOccurrence(currentQueueIndex);
                        return result;
                    }
                }
            }
        }

    What do you think about this idea? Are there better or simpler implementations?
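    For comparison, here is a compact, self-contained C# sketch of the same idea - one internal queue per message type plus an arrival-order list of type indexes. All names are invented for illustration, the type index doubles as the priority (0 = highest), and empty-queue checks are omitted for brevity:

        using System;
        using System.Collections.Generic;

        public class DualModeQueue
        {
            private readonly Queue<object>[] internalQueues;
            // Arrival order of type indexes; a LinkedList makes it cheap to
            // remove the first occurrence of a type in priority mode.
            private readonly LinkedList<int> arrivalOrder = new LinkedList<int>();

            public bool PriorityMode { get; set; }

            public DualModeQueue(int messageTypes)
            {
                internalQueues = new Queue<object>[messageTypes];
                for (int i = 0; i < messageTypes; i++)
                    internalQueues[i] = new Queue<object>();
            }

            public void Enqueue(object message, int typeIndex)
            {
                internalQueues[typeIndex].Enqueue(message);
                arrivalOrder.AddLast(typeIndex);
            }

            public object Dequeue()
            {
                int typeIndex;
                if (!PriorityMode)
                {
                    // Strict FIFO: the oldest arrival tells us which queue to read.
                    typeIndex = arrivalOrder.First.Value;
                    arrivalOrder.RemoveFirst();
                }
                else
                {
                    // Highest-priority non-empty queue.
                    typeIndex = Array.FindIndex(internalQueues, q => q.Count > 0);
                    // Drop this type's oldest arrival entry so that a later switch
                    // back to FIFO mode still sees the remaining messages in order.
                    arrivalOrder.Remove(typeIndex); // removes the first occurrence
                }
                return internalQueues[typeIndex].Dequeue();
            }
        }

    Keeping the arrival-order list consistent in both branches is what makes switching between the two modes safe at any time.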

    Read the article

  • Develop web site from existing software or cherry pick and use a web framework?

    - by erisco
    A small team and I are tasked with developing a web site. The client has referenced a particular open source project (we'll call it X) when describing some of the features. Because of this, the team wants to start with X and adapt it to satisfy the client. I have looked at X and its code and, in my opinion, it would be unwise. However, my experience is limited, and I could really benefit from the insights of others so that I can figure out what I should be asserting as the right direction for the team.

    My red flags are going up, and this is why. X was developed in the earlier days of PHP; 500-line blocks of code are the norm; global variables are abundant; giant switch cases are the norm for switching between which page is shown. There is no clear mapping between a URL and where the code for that page sits.

    From a feature-set standpoint, X is actually software specialized for a different task and has dozens of features we don't need or have any use for, which come as core assumptions. We will be unable to adapt X through its plugin system. That said, there are a few features which can be mapped, with some modification, to suit our purposes. I believe this is the attraction the team feels.

    I would feel comfortable if, instead of using X directly, we lifted what is salvageable and useful to us. We can then use that code, and the same 3rd party libraries X is using, in a new code base built on top of a PHP web framework (particularly Agavi, so you understand what I mean by 'web framework'). The web framework gives us a strong MVC structure and provides the common facilities for web development, or adapters to work with 3rd party libraries that do so. We will also have a clean slate feature-wise to work from, which means we can work additively instead of subtractively. Because the code base is better structured, and contains none of what we don't need, it will be easier to document, which is a critical requirement of our client.

    So to summarize, the team wants to use X, whereas I want to take the bits we can from X and use a web framework instead. I want to bounce this opinion off of others' experiences so that I can be more informed. Thanks for your insight.

    Read the article

  • Which programming language to get into?

    - by user602479
    I'm ending my third term in a few weeks so I have some spare time coming up. I'd like to spend it seriously digging into programming. My problem: I'm not sure which language to begin with. Just to be clear, I don't want to start a language-y-compared-to-language-z discussion. There are a some other issues that play a major role. In my 5th term I'm going to be participating in a major practical course which will include either Java or C programming. It will take a lot of time and energy, as I found out while talking to a few students who passed the final exams (only 15% pass on their first try). Which practical course I will take is randomly decided. My skills so far are the absolute basics of Java and C programming. I know the different data types and how to handle them, objects, pointers, thread programming, etc. All of that is on a very low level, though. My question now is, what language should I start seriously practicing?

    Java: I did my first GUIs with this language. I'm familiar with Eclipse but I need a project to work on (which I don't have) to really keep me pushing. Besides that, I don't think it would help me if I have to do C in a year.

    C: As with Java, I can't think of a personal project to keep me working and keep me interested in programming. If I get assigned to Java in a year, this wouldn't give me any advantages either, would it? (No objects, etc.)

    Objective-C: I recently came up with this idea. I have a Mac; I'm not really familiar with Xcode but I have one or two personal projects I'd like to work on. Further, I would be working with objects (as in Java) and C language constructs, which would both be great for this practical course in a year.

    What do you think I should begin with? Should I just stick to Java and hope for the best, force myself through C or start (nearly) completely from the beginning with Objective-C? Maybe you folks could give me some good advice that would stop me from switching from one language to the next?

    Read the article

  • New computer hangs on shutdown/reboot, how to troubleshoot?

    - by Torben Gundtofte-Bruun
    My system is working perfectly but it freezes during shutdown/reboot/suspend/hibernate: all windows and the menu bar disappear but the desktop wallpaper remains. It doesn't even show the shutdown screen (the one with the animated dots) where I could hit ESC and watch the shutdown console text. The system is brand-new and fully updated using Update Manager. How can I determine what is causing the freeze? Is there a log I can investigate? How can I fix this? I see no obvious cause of the freeze. The only USB attachment is a mouse/keyboard; I don't have any external storage attached; and I don't have any programs running (the machine freezes even when doing shutdown right from the login screen).

    What I've tried so far:

    - Based on other questions (this, this, and this) that suggest some ACPI settings, I've tried sudo shutdown -h now to see whether the shutdown console text display offers any hints, but the system doesn't even get that far - it still freezes while the screen shows the desktop background image, without any toolbars. Only sudo shutdown --force works, but that's not a solution.
    - Editing the grub menu to add acpi=off to the kernel didn't help. I guess there's not much point in trying the other (lesser) ACPI suggestions?
    - Adding noapic to the grub entry had no discernible effect. Adding nolapic instead did something (I had removed the quiet option) - the system managed to continue further with the shutdown, right until the line "Checking for running unattended-upgrades:", which were the last characters on the screen.
    - I've also checked the system BIOS, especially regarding power options, but didn't see anything out of the ordinary. Switching the BIOS standby setting from S3 to S1 didn't help. The standby setting can't be disabled, and there are no other ACPI-related settings AFAIK. BIOS reset didn't help. Not surprised; hadn't changed anything.
    - I tried going to a virtual console (Ctrl+Alt+F1) as suggested by djeikyb and from there did a shutdown -h now and it froze there too, after this console output. I didn't try killing processes one at a time because I'm still too much of a newbie to figure out how to do that.
    - Booting with kernel 2.6.35.22 rather than 2.6.35.25 didn't help.
    - Disabling the Nvidia drivers didn't help.
    - Booting from Live CD (USB stick in fact) didn't help; it freezes the same way. Booting from Live CD with acpi=off noapic nolapic didn't help either. Neither did just nolapic. So evidently this is not some custom setting in my install, but some sort of basic issue.
    - MemTest completed in 1 hour without errors.

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it:

        PM> Install-Package EntityFramework -Pre

    This week’s alpha release includes a bunch of great improvements in the following areas:

    - Async language support is now available for queries and updates when running on .NET 4.5.
    - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
    - Multi-tenant migrations allow the same database to be used by multiple contexts with full Code First Migrations support for independently evolving the model backing each context.
    - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
    - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
    - Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

    Below are some additional details about a few of the improvements above:

    Async Support

    .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

        public class HomeController : Controller
        {
            LocationContext db = new LocationContext();

            public async Task<ActionResult> Index()
            {
                var locations = await db.Locations.ToListAsync();

                return View(locations);
            }
        }

    Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application.

    Custom Conventions

    When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”.
    Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));
        }

    But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation (see the sketch at the end of this post). A more detailed description of custom Code First conventions is available here.

    Community Involvement

    I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are:

    - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
    - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations
                .AddFromAssembly(typeof(LocationContext).Assembly);
        }

    This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models.

    Other upcoming features coming in EF 6

    Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

    Summary

    EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
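    As a footnote to the Custom Conventions section above, here is a hedged sketch of that per-property override. The Order entity and TotalAmount property are invented for illustration; HasPrecision(precision, scale) is the standard Code First fluent call:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Convention: every decimal property gets precision 5...
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));

            // ...except this one, where the explicit configuration wins
            // over the convention.
            modelBuilder.Entity<Order>()
                .Property(o => o.TotalAmount)
                .HasPrecision(18, 2);
        }

    Conventions apply only where nothing more specific has been configured, so explicit fluent API or data annotation settings always take precedence.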

    Read the article

  • If most of the team can't follow the architecture, what do you do?

    - by Chris
    Hi all, I'm working on a greenfield project with two other developers. We're all contractors; my fellow programmer and I just started working on the project, while the original one has been doing most of the basic framework coding. In the past month, my fellow programmer and I have been just frustrated by the design decisions made by our co-worker. Here's a little background information:

    The application at face value appeared to be your standard n-layered web application using C# on the 3.5 framework. We have a data layer, business layer and a web interface. But as we got deeper into the project we found some very interesting things that have caused us some troubles. There is a custom data-access sqlHelper-style base which only accepts dictionary key/value entries and returns only data tables. There are no entity objects, but there are some massive objects which do everything and then are tossed into session for persistence. The general idea is that the pages (.aspx) don't do anything, while the controls (.ascx) do everything. The general flow is that a client clicks on a button, which goes to a user control base, which passes a process request to the 'BLL' class, which goes to the page processor, which then goes to a getControlProcessor, which at last actually processes the request. The request itself is made up of a dictionary which passes a string-valued method name, a stored procedure name, a control name and possibly a value. All switching of the processing is done by comparing the string values of the control names and method names. Pages are linked together via a common header control that uses a combination of javascript and tables to create a hyperlink effect. And as I found out yesterday, a simple hyperlink between one page and another does not work because of the need to have quite a bit of information in session to determine which control to display on a page.

    My fellow programmer and I both believe that this is a strange and uncommon approach to web application development. Both of us have been in this business for over five years and neither of us has seen this approach. My question is this: how should we approach our co-worker and voice our concerns, and what should we do if he does not want to accept the criticism? Neither of us wants to insult the work that has been done, but we feel that going forward will create a nightmare for development. Thanks for your comments.

    Read the article

  • Draw images with warped triangles on a web server [migrated]

    - by epologee
    The scenario: The Flash front end of my current project produces images that a web server needs to combine into a video. Both frame rate and frame resolution are sizeable enough that sending an image sequence to the back end is not feasible (in both time and client bandwidth). Instead, we're trying to recreate the image drawing on the back end as well.

    Correct and slow, or incorrect and fast: The problem is that this involves quite a bit of drawing of textured triangles, and two solutions we found in Python (here and there) are so inefficient that the drawing takes about 60 seconds per frame, resulting in a whopping 7.5 hours of processing time for a 30 second clip. Unacceptable. When using a PHP module to send commands to ImageMagick for image manipulation, the whole process is super fast (tenths of a second per frame), but ImageMagick seems to be unable to draw triangles the way we do it in the front end, so the final results do not match. Unacceptable. What I'm asking here is if there's someone who would know a way to solve this issue, by any means necessary, that would run on a web server.

    Warping an image: Let me explain the process of the front end:

    1. Perform a Delaunay calculation on points in an image to get an evenly distributed mesh of triangles.
    2. Offset the points/vertices in the mesh, distorting or warping the image.
    3. Draw the warped triangles on a new bitmap.

    We can send the results (coordinates) of steps 1 and 2 to the back end, to then draw the warped triangles and save the result to an image on disk (or append it as a frame to the video). But that last step is what I need help with.

    The question: Is there an alternative to ImageMagick that can draw triangles in a bitmap? Is there some other library, like a C library, that would allow us to do this? Or could we achieve this effect more easily by switching back-end technologies, like Ruby? (.Net and Java are, unfortunately, not really options right now.) Many thanks. EP. P.S. I'd appreciate re-tagging efforts, I don't quite know what labels to put on this question. Thanks!

  • Toshiba wireless is blocked

    - by zorrillo
    I have the same problem: a 32-bit Toshiba NB255 which first had Windows 7; next I installed Ubuntu 11.04. The wifi does not turn on. I used the commands rfkill unblock wlan0 and sudo ifconfig wlan0 down; they were not able to fix it. Setting the wireless communication switch to ON in the BIOS advanced menu did not work either. Neither the Toshiba utilities nor the Ubuntu utilities (wicd, wifi radar) helped. Editing the group file with gedit: nothing. Installing the madwifi package: nothing. Importing the wifi driver from Windows into Ubuntu by means of the NDISwrapper package: neither. Here is the current output of the CLI (translated from Spanish):
        root@zorrillo:~# rfkill list
        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
        root@zorrillo:~# sudo lspci | grep Atheros
        07:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
        root@zorrillo:~# ifconfig -a
        eth0    Link encap:Ethernet  HWaddr 88:ae:1d:47:df:e1
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                Interrupt:43 Base address:0xe000
        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)
        ppp0    Link encap:Point-to-Point Protocol
                inet addr:189.203.115.236  P-t-P:192.168.226.1  Mask:255.255.255.255
                UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                RX packets:6384 errors:30 dropped:0 overruns:0 frame:0
                TX packets:6893 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:3
                RX bytes:5473081 (5.4 MB)  TX bytes:974316 (974.3 KB)
        wlan0   Link encap:Ethernet  HWaddr 00:26:4d:c3:d0:44
                BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    Note: it is logical that all the counters are zero if the wireless device is down.
        root@zorrillo:/home/zorrillo/Descargas/802BGA# ifconfig wlan0 up
        SIOCSIFFLAGS: Operation not possible due to RF-kill
        root@zorrillo:/home/zorrillo/Descargas/802BGA#
    It would seem the wireless switch does not react to any Ubuntu 11.04 command (I hope to be wrong). The situation remains the same, as the output above shows. I am worried; I have tried to find an answer for days and nights. Toshiba does not supply drivers or software support for Linux (marketing, of course). I can only see that the device is up at the protocol level and down physically. My question is: is it possible to enable the device, or keep it from being shut down physically? On the Toshiba NB255 the wifi is not switched on/off by means of a physical switch, but by the combination Fn + F8 (which works only in Windows 7). Is there a possibility to configure these hotkeys in Ubuntu?

  • What's new in EJB 3.2? - Java EE 7 chugging along!

    - by arungupta
    EJB 3.1 added a whole ton of features for simplicity and ease-of-use, such as @Singleton, @Asynchronous, @Schedule, portable JNDI names, EJBContainer.createEJBContainer, EJB 3.1 Lite, and many others. As part of Java EE 7, EJB 3.2 (JSR 345) is making progress, and this blog will provide highlights from the work done so far. This release has been deliberately kept small but includes several minor improvements and tweaks for usability.

    More features in EJB Lite:
    - Asynchronous session beans
    - Non-persistent EJB Timer Service
    This also means these features can be used in the embeddable EJB container, thereby improving the testability of your application.

    Pruning - the following features were made "Proposed Optional" in Java EE 6 and are now made optional:
    - EJB 2.1 and earlier Entity Bean Component Contract for CMP and BMP
    - Client View of an EJB 2.1 and earlier Entity Bean
    - EJB QL: Query Language for CMP Query Methods
    - JAX-RPC-based Web Service Endpoints and Client View
    The optional features are moved to a separate document, and as a result the EJB specification is now split into Core and Optional documents. This allows the specification to be more readable and better organized.

    Updates and improvements:

    Transactional lifecycle callbacks in stateful session beans, only for container-managed transactions (CMT). In EJB 3.1, the transaction context for lifecycle callback methods (@PostConstruct, @PreDestroy, @PostActivate, @PrePassivate) is defined as shown:

                    @PostConstruct                      @PreDestroy                         @PrePassivate                       @PostActivate
        Stateless   Unspecified                         Unspecified                         N/A                                 N/A
        Stateful    Unspecified                         Unspecified                         Unspecified                         Unspecified
        Singleton   Bean's transaction management type  Bean's transaction management type  N/A                                 N/A

    In EJB 3.2, stateful session bean lifecycle callback methods can opt in to being transactional. These methods are then executed in a transaction context as shown:

                    @PostConstruct                      @PreDestroy                         @PrePassivate                       @PostActivate
        Stateless   Unspecified                         Unspecified                         N/A                                 N/A
        Stateful    Bean's transaction management type  Bean's transaction management type  Bean's transaction management type  Bean's transaction management type
        Singleton   Bean's transaction management type  Bean's transaction management type  N/A                                 N/A

    For example, the following stateful session bean requires a new transaction to be started for its @PostConstruct and @PreDestroy lifecycle callback methods:

        @Stateful
        public class HelloBean {
            @PersistenceContext(type=PersistenceContextType.EXTENDED)
            private EntityManager em;

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            @PostConstruct
            public void init() {
                myEntity = em.find(...);
            }

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            @PreDestroy
            public void destroy() {
                em.flush();
            }
        }

    Notice that, by default, the lifecycle callback methods are not transactional, for backwards compatibility. They need to explicitly opt in to being made transactional.

    Opt-out of passivation for stateful session beans - if your stateful session bean needs to stick around, or if it has non-serializable fields, the bean can opt out of passivation as shown:

        @Stateful(passivationCapable=false)
        public class HelloBean {
            private NonSerializableType ref = ...;
            . . .
        }

    Simplified rules to define all local/remote views of the bean. For example, if the bean is defined as:

        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    where Foo and Bar have no annotations of their own, then Foo and Bar are exposed as local views of the bean. The bean may be explicitly marked @Local as:

        @Local
        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    and this is the same behavior as explained above, i.e. Foo and Bar are local views. If the bean is marked @Remote as:

        @Remote
        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    then Foo and Bar are remote views. If an interface is marked @Local or @Remote, then each interface needs to be marked explicitly to be exposed as a view. For example:

        @Remote
        public interface Foo { . . . }

        @Stateless
        public class Bean implements Foo, Bar {
            . . .
        }

    only exposes one remote interface, Foo. Section 4.9.7 of the specification provides more details about this feature.

    TimerService.getAllTimers is a newly added convenience API that returns all timers in the same bean. This is only for displaying the list of timers, as a timer can only be canceled by its owner.

    Removed the restriction on obtaining the current class loader, and allowed use of the java.io package. This is handy if you want to do file access within your beans.

    JMS 2.0 alignment - a standard list of activation-config properties is now defined: destinationLookup, connectionFactoryLookup, clientId, subscriptionName, shareSubscriptions.

    Tons of other clarifications throughout the spec; Appendix A provides a comprehensive list of changes since EJB 3.1. ThreadContext in Singleton is guaranteed to be thread-safe. The embeddable container implements AutoCloseable.

    A complete replay of Enterprise JavaBeans Today and Tomorrow from JavaOne 2012 can be seen here (click on CON4654_mp4_4654_001 in Media). The specification is still evolving, so the actual property or method names or their actual behavior may differ from the currently proposed ones. Are there any improvements that you'd like to see in EJB 3.2? The EJB 3.2 Expert Group would love to hear your feedback. An Early Draft of the specification is available; the latest version of the specification can always be downloaded from here.
    - Java EE 7 Specification Status
    - EJB Specification Project
    - JIRA of EJB Specification
    - JSR Expert Group Discussion Archive
    These features will start showing up in GlassFish 4 Promoted Builds soon.

  • Wired connection not being recognized

    - by maxifick
    I'm on Ubuntu Maverick (10.10). I've read a few threads regarding wired connection problems, but haven't found a solution yet. The problem appears after I connect to a wireless network. When I disconnect from wireless and plug in an ethernet cable, the wired connection is not recognized at all. Even the socket appears dead (there are no diodes flashing). The only solution so far seems to be restarting the computer; Network Manager then tries to connect to a Wi-Fi network, but the wired connection is listed and working. I've tried sudo restart network-manager, but that doesn't solve anything. After a while, available wireless networks start appearing, but the wired connection still doesn't. Any ideas? Thanks in advance.
    Edit: Here is the dmesg output after switching off Wi-Fi and then plugging in the ethernet cable:
        [18200.623543] Restarting tasks ... done.
        [18200.648422] video LNXVIDEO:00: Restoring backlight state
        [18200.707580] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.707715] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.707819] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.707922] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708025] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708127] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708229] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708332] sky2 0000:02:00.0: eth0: phy I/O error
        [18200.708824] sky2 0000:02:00.0: eth0: enabling interface
        [18200.709587] ADDRCONF(NETDEV_UP): eth0: link is not ready
        [18202.662422] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
        [18203.324061] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0
        [18211.193137] eth1: no IPv6 routers present
        [18212.844649] usb 5-1: new low speed USB device using ohci_hcd and address 5
        [18213.017235] input: USB Optical Mouse as /devices/pci0000:00/0000:00:13.0/usb5/5-1/5-1:1.0/input/input16
        [18213.017499] generic-usb 0003:0461:4D17.0004: input,hidraw0: USB HID v1.11 Mouse [USB Optical Mouse] on usb-0000:00:13.0-1/input0
    After a system restart, dmesg says this:
        [   19.802126] sky2 0000:02:00.0: eth0: enabling interface
        [   19.802394] ADDRCONF(NETDEV_UP): eth0: link is not ready
        [   20.812533] device eth0 entered promiscuous mode
        [   21.495547] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx
        [   21.495677] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx
        [   21.496574] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

  • Usage of Pirated software at a company

    - by hakesh
    Hi, I was redirected here from Stack Overflow since the topic is ethics rather than programming. Thank you for letting me know, and please give me advice. I started working at a company as an engineer a couple of months ago. It's a small company and what they basically do is a phone answering service. Now they are switching from normal phones to IP phones, so computers are taking a more important place in the work. However, all the computers used by workers are equipped with pirated software; even their operating systems are. Moreover, they don't even buy one license and make copies for the other computers. In other words, they do not spend any money on software for the office. I am not saying copying a licensed one would be legit, but the situation is too much. There is one guy who did the installing of that pirated software. He does not feel any sense of guilt and even acts justified when I ask about it, and he is not even a specialist; he just searched the internet for how to install pirated software. Our boss does not have any knowledge of computers, so he took the cheaper way. What do you think about this? Since I am still new to the company, I am not doing maintenance or management of those cracked computers. But I have to use the software daily, and later on I will be doing support and help-desk kinds of stuff. I really don't want to take responsibility for operating pirated software. And from the perspective of a developer and engineer, pirated software cannot get legal support and may work unexpectedly. So, I am thinking about changing jobs. Am I thinking too much? Should I wait until I have more credibility with the boss and try to change his policy? So far, the boss does not take any words from me. Any opinions are welcome. Thank you.

  • How do I prevent Ubuntu gradually slowing down to a stop?

    - by user29165
    I really do not know what is causing my desktop environment (D.E.) to gradually slow down. It seems to happen in Gnome 3.2, LXDE, XFCE, and KDE. I am running Ubuntu 11.10 and this problem has been happening since a fresh install. I have noticed that if I restart Gnome, or otherwise re-log-in with any of the D.E.s, the speed resets to normal. I don't seem to have any memory leaks that would account for it, and there are no processes using excessive CPU cycles. Given the symptoms, I am guessing that some underlying package that interacts with all D.E.s is the source of the problem. Just to clarify: in extreme situations, if I let the slowdown continue without a restart of the D.E., my system finally gets to a point where everything, even my clock, freezes. The time over which Gnome slows down (noticeable after 17 hrs) is much shorter than for LXDE or XFCE, but eventually they freeze as well. I haven't mentioned Unity since I haven't used it enough to verify the problem, but I don't see why it should be different. Just in case this is relevant: I suspend my computer rather than turn it off. I do this because of the greatly reduced load time, and the 17 hrs I stated above is actual up-time, not time where the D.E. is actually active. However, the slowdown does not seem to be affected by how long the computer has been in suspend mode. I know it is possible that this problem is due to an interaction between two or more of the applications I use, and as such it may not be reproducible by others. In the end I am just wondering (a) whether other people are experiencing this issue and (b) whether anyone has advice on where to start looking for a solution if one is not already known. I am even open to the idea of switching to the beta release of 12.04 if anyone thinks it will solve the problem. Edit: I took a video of my latest freeze that you can watch at http://youtu.be/flKUqUzCmdE, sorry about the audio. Edit: Since that video I checked my memory for errors and tried installing the proprietary video drivers. No memory errors were found, and the proprietary video drivers made the display unusable, so I had to uninstall them. Any thoughts?

  • 12.04 Booting into Terminal

    - by user170796
    To preface this, I would like to say that I am completely new to Ubuntu and have essentially zero programming experience or experience working with the command line and terminal. I installed Ubuntu because I would like to get into programming. If you could provide me with the simplest instructions possible, I would be grateful. I have a Lenovo Ideapad Y500 (Intel i7, NVidia GT 750m, 1TB HDD, 16GB SSD cache, 8GB RAM) with Windows 8 on it. Using a Live CD, I installed Ubuntu 12.04 onto a 75 GB partition. During the installation, I kept all default settings except for one thing; I decided to encrypt my home folder, and so checked the corresponding box. The installation completed, and I restarted. Once I restarted, I saw the options "Ubuntu, with Linux 3.2.0-23-generic", "Ubuntu, with Linux 3.2.0-23-generic (recovery mode)", "Memory test (memtest86+)", "Memory test (memtest86+, serial console 115200)", "Windows Recovery Environment (loader) (on /dev/sdb3)", "Windows 8 (loader) (on /dev/sdb5)" and "System Setup". I chose the first option, and was directed to a screen with the Ubuntu logo and the row of five dots below it that change from orange to white. Then I was brought to a full-screen terminal that prompted me to log in, which I did. I saw no option to boot into the GUI at all, and am lost. I've been searching around and have tried the "startx" command to no avail. Should the command have some sort of context or something? I've also tried selecting the recovery mode option from the boot manager. I've tried the resume option from the following menu, which eventually just shuts down the computer after displaying a lot of scrolling text that's too fast for me to read. I've also tried the failsafeX mode from the recovery mode menu, which only brings up a terminal box at the bottom of the window that covers the entire bottom part of the screen. Commands won't work in this window. When I try to access Windows 8, I get a message saying that the EFI file path was not specified, or something along those lines. I had to enable Secure Boot in order to access Windows 8 (I had disabled it to be able to boot from the Live CD), which is functioning normally. I am at a complete loss for what to do. Any help will be extremely appreciated. EDIT: Bonus question! If you could figure out a way for me to boot to Windows 8 without having to enable Secure Boot, it would save me a lot of trouble. I can deal with switching every time, but I'd rather not have to.

  • HD Tune warning for "Reallocated Event Count" with a new/unused drive. How serious is that?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that? The thing is that the drive is practically new. I pulled it out of a new laptop over a year ago and have never used it since. Right now it only has 53 "Power On" hours, which sounds about right since I only had it running a few evenings overnight before switching it for something more performant. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed, since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it. It is 12.5 mm thick (with 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive with a clear conscience, or should I rather cancel the deal? In other words, can the drive be used safely for years to come, or is it better to throw it away? I'm running a sector test now to see if there are any real problems. I will post the results as soon as they're available.

  • Strange ZFS hidden filesystem problem

    - by RandomInsano
    Half of my ZFS filesystems are hidden in ZFS-fuse. Here's my story: I love ZFS. I used it for about six months on FreeBSD, but because it crashed the kernel during heavy inter-filesystem IO load, I tried switching to Solaris 5.10. That was good, but when I attempted to import my version 13 pool into its version 4 ZFS, there were some hefty problems. It may have tried to correct the filesystem definitions, I don't know. Since that version wasn't compatible with my pool, I've now switched to Ubuntu Server 10.04. That version more than supports my pool's version, but I can only see half of my filesystems. The filesystems I can see are the same ones Solaris could see. Now, despite those filesystems not being present in the output of 'zfs list', I can still set properties on them, and I can even still mount them and read and write files; they just plain don't show up in 'zfs list'. I've mounted the major ones, but I'm not sure what other filesystems there are anymore (there are about eight I can't see). Anyone have any idea what the heck is going on? I think I might try booting back into FreeBSD 8 (I still have the main boot drive lying around for that) and see if at least it is able to view the filesystems. I've also done a scrub while in Linux, and it found no errors with any of the data. Oddly, the DMA read errors which caused problems with ZFS on FreeBSD are also reported by Linux, but ZFS-fuse doesn't find an error. That's a topic for another post, however.

  • Amazon Ec2: Problem In Setting up FTP Server

    - by Muntasir
    After setting up my vsftpd server on EC2 I am facing a problem. My client is FileZilla and I am getting this error:
        Response: 230 Login successful.
        Command:  OPTS UTF8 ON
        Response: 200 Always in UTF8 mode.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/"
        Command:  TYPE I
        Response: 200 Switching to Binary mode.
        Command:  PASV
        Response: 500 OOPS: invalid pasv_address
        Command:  PORT 10,130,8,44,240,50
        Response: 500 OOPS: priv_sock_get_cmd
        Error:    Failed to retrieve directory listing
        Error:    Connection closed by server
    This is the current setting in my vsftpd.conf:
        #nopriv_user=ftpsecure
        #async_abor_enable=YES
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        # chroot_local_user=YES
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        #
        #ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        pasv_enable=YES
        pasv_min_port=2345
        pasv_max_port=2355
        listen_port=1024
        pasv_address=ec2-xxxxxxx.compute-1.amazonaws.com
        pasv_promiscuous=YES
    Note: I have already opened those ports in the security group, I mean the listen port and the passive min/max range. If someone shows me how to fix this I will be very grateful. Thanks.
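    The "500 OOPS: invalid pasv_address" reply is the giveaway: vsftpd expects a numeric IP in pasv_address and, unless pasv_addr_resolve is enabled, it will refuse a DNS name like the ec2-...compute-1.amazonaws.com one above. A hedged sketch of just the passive-mode section follows; the directives are real vsftpd options, but the address shown is a placeholder to be replaced with the instance's public Elastic IP, and the 2345-2355 range must stay open in the security group:
        # Passive mode (sketch; the address below is a placeholder)
        pasv_enable=YES
        pasv_min_port=2345
        pasv_max_port=2355
        # Either advertise the public address numerically...
        pasv_address=203.0.113.10
        # ...or keep the DNS name and let vsftpd resolve it at startup:
        # pasv_address=ec2-xxxxxxx.compute-1.amazonaws.com
        # pasv_addr_resolve=YES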
