Search Results

Search found 64639 results on 2586 pages for 'work stealing'.

Page 313 of 2586

  • Moodle Inconsistencies, Flowplayer, or Server?

    - by dglickler
    We are trying to decide if an issue we're having with Moodle and our videos is a Flowplayer issue, a video issue, or a network issue. Any input is welcome. We've had videos in our Moodle (version 1.9; we're working on an upgrade on a different server) up for weeks, or even months. After that time, some of them have suddenly just stopped working. When they don't work, they just don't load. The videos work when we first upload them. With Flowplayer, we don't get errors, just a blank screen. We have re-uploaded our videos several times when this has happened, but we really would like to know what's causing it so that we can prevent it. There are also no keyframe issues with the videos. We are currently trying to find answers through various searches, but have not had any luck. I will continue to post more info as I come across it, but if anyone knows the cause or has ideas, they are welcome.

    Read the article

  • Visual Studio 2010 Service Pack 1 Beta Released!

    - by Jim Duffy
    Just thought I'd pass on the word that the Visual Studio 2010 Service Pack 1 Beta is now available to download. VS2010 SP1 Beta ships with a go-live license, which means you can start using it for production work, though I'm not sure I'm going to be that brave until I check it out a bit first. Jason Zander has a blog post outlining the new features/fixes included in the beta. Here are a couple of BREAKING news items you'll want to take note of… VS2010 SP1 Beta BREAKS ASP.NET MVC 3 RC Razor IntelliSense. A new ASP.NET MVC 3 RC2 installer will be released very soon that will allow you to upgrade in place. VS2010 SP1 Beta BREAKS the Visual Studio Async CTP. A workaround is being worked on, but for now, if you're working with the Async CTP then stick with VS2010 RTM. Have a day.

    Read the article

  • How can I copy/paste files via RDP in Kubuntu?

    - by Dai
    I recently installed the latest Kubuntu (x64) on my work machine, as I am trying to migrate away from Windows. Unfortunately I use RDP very frequently to connect to customers' servers and need to be able to copy files across. I have tried the following packages with no luck: remmina, rdesktop, xfreerdp. My latest attempt to solve this involved connecting one of my folders to the remote server; here is the command I used to launch rdesktop: rdesktop -5 -K -r disk:home=/home/dai -r clipboard:CLIPBOARD -r sound:off -x l -P 192.168.0.2 -u "administrator" -p pass The servers are not all running the same version of Windows; the one I've been trying so far is running Server 2003 R2. Customer servers range from Server 2000 to Server 2008. I've been Googling this like mad, but all the solutions I find seem to fail, maybe because almost all the help out there assumes that I'm running GNOME. Sorry if this is a stupid question. Thanks in advance for your help. Edit: Copying and pasting text seems to work just fine, but that's not what I need.
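    For reference, a minimal sketch of the drive-redirection part, with both clients side by side (the xfreerdp flags use the newer /option syntax, and which syntax applies depends on the freerdp version Kubuntu ships, so treat them as an assumption to check against the man page):

        #!/usr/bin/env bash
        # Sketch: redirect a local folder and the clipboard over RDP.

        HOST=192.168.0.2
        USER=administrator
        SHARE_NAME=home
        SHARE_PATH=/home/dai

        # rdesktop: expose $SHARE_PATH on the server as a drive named "home"
        rdesktop -r "disk:${SHARE_NAME}=${SHARE_PATH}" -r clipboard:CLIPBOARD \
                 -u "$USER" "$HOST"

        # xfreerdp equivalent (newer /option syntax; verify on your freerdp version)
        xfreerdp /v:"$HOST" /u:"$USER" \
                 /drive:"$SHARE_NAME","$SHARE_PATH" +clipboard

    With the folder redirected, the files should show up on the server under \\tsclient\home in Explorer, which works around the missing file copy/paste.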

    Read the article

  • Ubuntu won't suspend automatically any more

    - by Sparhawk
    In the last month or so, Ubuntu (12.04) has stopped sleeping automatically. I've gone to System Settings > Power and verified (and toggled) "suspend when inactive for" to 5 minutes (for both battery and "when plugged in"), but the system stays awake. I've also tried using commands similar to $ gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 300 $ gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 300 to set the timeout values. I've also verified these in dconf Editor. Previously, I could set this quite low to make my computer sleep quickly, but now it no longer works either. I'm not sure if this is relevant, but under old versions of Ubuntu, if I wanted my computer to never suspend (via the CLI), I would also have to set $ gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac false At some point, this seems to have been deprecated (and also gave me the error "No such key 'sleep-inactive-ac'"). I found that it was enough to set sleep-inactive-ac-timeout to 0. This worked for a while, but at some point auto-suspend stopped working, as stated above. Oddly enough, the sleep-inactive-ac key is still present when I look via dconf Editor. However, when I click it, it says "no schema", and the summary, etc. fields are blank. To test whether the dconf power plugin was working, I tried playing around with other settings in the schema. idle-dim-time and idle-dim-ac work as expected. However, setting sleep-display-ac to 5 seconds has no effect. I'm also not sure if this is relevant, but I've uninstalled gnome-screensaver and installed xscreensaver. I have tried killing xscreensaver and re-installing gnome-screensaver, but this did not help. I've also had some trouble with DPMS. I'm not sure if this is related, but I'll put the information here, just in case. Using xscreensaver, I set Power Management to enabled, with standby and suspend timeouts of 10 minutes. I've verified these settings in ~/.xscreensaver and xset q. However, the screen blanks after about 30 seconds. If I turn off DPMS (either via the xscreensaver GUI or by modifying ~/.xscreensaver), it won't blank at all, so I know that DPMS is partially reading the xscreensaver settings. -- edit I've attempted more troubleshooting by creating a new user account, then logging out of the main account and into the new account. I've tried modifying the timeouts via dconf, but get the same results as above (i.e. it doesn't work, nor does sleep-display-ac, but idle-dim-time and idle-dim-ac work). Also, the deprecated sleep-display-ac key is not visible, so I think that this is probably unrelated. -- edit I've since moved to GNOME Shell instead of Unity, and still have this problem, so I guess that it's something to do with gnome-power-manager.
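    For anyone trying to reproduce this, a short sketch of dumping and re-applying the relevant keys (the schema and key names are the ones mentioned above, from 12.04; they may differ on other releases):

        #!/usr/bin/env bash
        # Sketch: inspect and set the gnome-settings-daemon power keys.
        SCHEMA=org.gnome.settings-daemon.plugins.power

        # List every key in the power plugin schema, to spot "No such key" cases
        gsettings list-recursively "$SCHEMA"

        # Ask for suspend after 5 minutes of inactivity, on AC and on battery
        gsettings set "$SCHEMA" sleep-inactive-ac-timeout 300
        gsettings set "$SCHEMA" sleep-inactive-battery-timeout 300

        # Read the values back to confirm they were stored
        gsettings get "$SCHEMA" sleep-inactive-ac-timeout
        gsettings get "$SCHEMA" sleep-inactive-battery-timeout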

    Read the article

  • A record DNS, nameserver help

    - by Josip Gòdly Zirdum
    I just installed Kloxo on my VPS and I want to link my domain to that server... which it sort of already is: I made it connect via an A record. That works, as in the IP is pointing to my server, but how do I make a website using it? I tried adding the domain, but this doesn't work... I feel I'm not explaining this well. On my server it asked me to create a DNS template, so I did: I created the nameservers ns1.mydomain.com and ns2.mydomain.com. Then I added the domain to the panel, mydomain.com. I go to the folder it creates for it, but no matter what file is there it won't work... any ideas? Is there a way to possibly not even add a domain to Kloxo and just treat the IP of the server as the domain, since the A record points there anyway? I don't intend to host another website on the server anyway...
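    To help narrow this down, a rough sketch that separates "is DNS pointing at the VPS?" from "is the web server answering for this domain?" (mydomain.com and 192.0.2.10 are placeholders for the real domain and server IP):

        #!/usr/bin/env bash
        DOMAIN=mydomain.com
        SERVER_IP=192.0.2.10

        # 1. Does the A record resolve to the VPS? Ask a public resolver directly.
        dig +short "$DOMAIN" A @8.8.8.8

        # 2. Is the delegation to ns1/ns2.mydomain.com actually in place?
        dig +short NS "$DOMAIN"

        # 3. Does the web server answer for this domain, bypassing DNS entirely?
        #    If this returns the site but step 1 gives the wrong IP, it's a DNS
        #    problem; if this fails too, the vhost/document root on the server
        #    (i.e. the Kloxo side) is the issue.
        curl -s -H "Host: $DOMAIN" "http://$SERVER_IP/" | head -n 20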

    Read the article

  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions: My application uses the Festival app 'text2wave'. Will it work on the cloud? I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work? If your answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)? I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.

    Read the article

  • Java Plugin - Firefox

    - by Tomassino
    Having trouble getting Java to work with Firefox (22); I have followed the advice in this question and on the official Java site, but nothing seems to work. I have the latest Java (1.7.0_25) in /opt/java and have set a symlink in /usr/lib/mozilla/plugins for the libnpjp2.so file. I can see the file in the terminal, and Java runs fine. However, Firefox shows nothing in about:plugins. I have also run export JAVA_HOME="/opt/java/jre1.7.0_05/bin/java" to be on the safe side. I know there are multiple plugin directories, such as /usr/lib/firefox/plugins and /usr/lib/firefox-addons/plugins, but all my current plugins show they are located in /usr/lib/mozilla/plugins when viewing the about:plugins page. A bit stuck on where to go next?
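    For reference, a sketch of the usual symlink setup (the exact path to libnpjp2.so below assumes a 64-bit JRE laid out like the /opt/java install described above; on a 32-bit JRE it sits under lib/i386 instead):

        #!/usr/bin/env bash
        # Sketch: link the Java browser plugin where Firefox looks for it.
        JRE_HOME=/opt/java/jre1.7.0_25

        # System-wide plugin directory (the one about:plugins reports for the
        # other plugins)
        sudo ln -sf "$JRE_HOME/lib/amd64/libnpjp2.so" /usr/lib/mozilla/plugins/

        # Per-user alternative, if you'd rather not touch /usr/lib
        mkdir -p ~/.mozilla/plugins
        ln -sf "$JRE_HOME/lib/amd64/libnpjp2.so" ~/.mozilla/plugins/

        # Restart Firefox afterwards and re-check about:plugins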

    Read the article

  • Problems with HP laptop

    - by Jan
    So the problem is... every time I reboot or shut down the laptop, and the laptop boots again, I get a screen (before the OS loads) saying that HP discovered overheating and the system went into hibernation. But the point is that the laptop is not overheating, nor is it going into hibernation by itself. Also, because of the hybrid graphics cards, I cannot install additional drivers. Desktop resolution and everything else works perfectly, but I cannot use Unity 3D, and OpenGL doesn't work as it should (with Cairo-Dock). I've read posts where people say that switcheroo doesn't work on 12.04, so I haven't tried it.

    Read the article

  • Why not use unmanaged, unsafe code in C#?

    - by user613326
    There is an option in C# to execute code unchecked. It's generally not advised to do so, as managed code is much safer and it overcomes a lot of problems. However, I am wondering: if you're sure your code won't cause errors, and you know how to handle memory, then why (if you like fast code) follow the general advice? I am wondering this since I wrote a program for a video camera which required some extremely fast bitmap manipulation. I made some fast graphical algorithms myself, and they work excellently on the bitmaps using unmanaged code. Now I wonder in general: if you're sure you don't have memory leaks, or risks of crashes, why not use unmanaged code more often? PS my background: I kind of rolled into this programming world and I work alone (I have done so for a few years), so I hope this software design question isn't that strange. I don't really have other people around, like a teacher, to ask such things.

    Read the article

  • JSF 2.x's renaissance

    - by alexismp
    JAXenter's Chris Mayer posted a column last week about the "JavaServer Faces enjoying Java EE renaissance under Oracle's stewardship". This piece discusses the adoption and growing ecosystem (component libraries, tools, runtimes, ...) since the release of JSF 2.0, as well as ongoing work on 2.2. As Cameron Purdy comments, Oracle as a company certainly has a vested interest in JSF and will continue to invest in the technology. Specifically for JSF 2.2, and as this other article points out, a lot of the work has to do with alignment with HTML5 (see this example) and making the technology even more mobile-friendly (along with the main Java EE 7 "PaaS" theme, of course). Chris' article concludes with "JSF appears to be the answer for highly-interactive Java-centric organisations who were hesitant of making a huge leap to JavaScript, and wanted the best RIA applications at their disposal".

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering how it would be if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where dependencies are as follows:
    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:
      studio/
      studio/libs/ (sub-module depth: 1)
      studio/libs/graph/
      studio/libs/graph/libs/ (sub-module depth: 2)
      studio/libs/graph/libs/core/
      studio/libs/network/
      studio/libs/network/libs/ (sub-module depth: 2)
      studio/libs/network/libs/core/
    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core. Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules? Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:
    - Have the build systems for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core.
    Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode. The resulting folder structure looks like:
      studio/
      studio/libs/ (sub-module depth: 1)
      studio/libs/core/
      studio/libs/graph/
      studio/libs/graph/libs/ (empty folder, sub-modules not fetched)
      studio/libs/network/
      studio/libs/network/libs/ (empty folder, sub-modules not fetched)
    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
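    To make the proposed layout concrete, a rough sketch of the submodule wiring and the non-recursive fetch (repository URLs are placeholders):

        #!/usr/bin/env bash
        # One-time wiring inside the studio repository: core, graph and network
        # become direct submodules, so core exists exactly once at libs/core.
        git submodule add git://example.org/core.git    libs/core
        git submodule add git://example.org/graph.git   libs/graph
        git submodule add git://example.org/network.git libs/network
        git commit -m "Add core, graph and network as submodules"

        # Fresh clone: fetch only studio's direct submodules, NOT their nested
        # ones, so studio/libs/graph/libs/ and studio/libs/network/libs/ stay
        # empty and only studio/libs/core is populated.
        git clone git://example.org/studio.git
        cd studio
        git submodule update --init          # deliberately no --recursive

        # A standalone build of graph, by contrast, would fetch everything:
        #   git submodule update --init --recursive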

    Read the article

  • Installing my graphics card on Ubuntu 12.04

    - by lamouchi amine
    I have an HP Pavilion G6 series 1225, i5 laptop with a Radeon HD 6470M switchable VGA. I installed Ubuntu 12.04 LTS, but the VGA drivers don't work properly. I want to install the drivers on Ubuntu, but when I do, an error message like this appears: sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log I found a solution in a link: http://ubuntuforums.org/showthread.php?t=1930450 It works for a few steps, and then during the installation of the AMD driver package a 'fatal error' message pops up and it redirects me to Ask Ubuntu to find a solution. Please help me, I need to make it work.

    Read the article

  • How to port animation from one skeleton to another?

    - by shawn
    While I need to do this in a Blender3D modeler script, the math should be similar for other modelers or realtime engines. Blender3D-specific terminology:
    - Armature = skeleton
    - EditBone = rest pose bone (stores the rest pose matrix)
    - PoseBone = can store a different pose (animation matrix) for each frame of your animation
    I need to share animations (Blender Actions) between Armatures which have EditBones with the same names and the same positions, but which can have different (rest pose) angles and scales. Plus, the Armatures might have different bone hierarchies (bone parenting / no bone parenting). Why I need this: I've made an importer/exporter for a 3d format for a game. The format doesn't store enough info to connect/parent the bones, which makes posing/animating character models in a 3d modeller nearly impossible (original model files for the 3d modeler don't exist; this is for modding). As there are only 2 character skeleton types in the game, I decided to optionally allow generating the bones from hardcoded data in the model importer and undo that in the exporter. This allows me to easily pose the model for checking weights, easily create weights, makes it easier for Blender to generate automatic weights, and of course makes animating possible. This worked perfectly: the importer optionally generated the Armature itself and the exporter removed those changes, so the exported model works with existing animations in the game. But now I'm writing an importer and exporter for the game's animation format, and here come the problems of:
    - Trying to make original animations work in Blender with my "custom" (modified) Armature
    - Trying to make animations created by using the "custom" (modified) Armature work with the original models in the game (and Blender).
    Constraints or bone snapping inside Blender won't work, as they don't care that the bones have different angles in the rest pose; they will still face the same direction. It seems I just need to get the "difference" between the EditBone matrices of all EditBones for the two Armatures somehow and apply that difference to the PoseBone matrices of all PoseBones, for all frames of my animation. I need to know how to get that difference and how to apply it. BTW, PoseBone matrices are relative to the rest pose; by default each one is the 4x4 identity matrix:
      [1.000000, 0.000000, 0.000000, 0.000000] (matrix [row 0])
      [0.000000, 1.000000, 0.000000, 0.000000] (matrix [row 1])
      [0.000000, 0.000000, 1.000000, 0.000000] (matrix [row 2])
      [0.000000, 0.000000, 0.000000, 1.000000] (matrix [row 3])
    So the question is: How do I get the difference between two bone (EditBone) matrices to apply that difference to the animation matrices (PoseBone matrices)? Please be easy on the matrix math.
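    To make the question concrete, one guess for the simplest case (no parenting, and ignoring whether Blender multiplies row or column vectors, so the order may need flipping): if R_old and R_new are a bone's armature-space rest matrices in the two Armatures, and P_old is the PoseBone matrix from the original animation, then requiring the same armature-space result gives

        R_old * P_old = R_new * P_new   =>   P_new = R_new^-1 * R_old * P_old

    With parenting, I assume the parents' rest-pose differences have to be folded in as well, and that is exactly the part that needs checking.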

    Read the article

  • VDPAU performance in Precise with Unity 3D

    - by bowser
    VDPAU seems to be broken in Precise under Unity 3D. CPU usage ranges around 50-70% for 1080p movies, while the same movies use around 5-10% in Natty with VDPAU enabled (under Unity 3D). The card is an Nvidia G105M. It doesn't seem to be an Nvidia driver problem, because in GNOME Shell everything works as expected, and I have tried different versions of the Nvidia drivers (295.20, 295.33, 295.40 and the latest 302.XX from xorg-edgers). The results are all the same: it works in GNOME Shell but not in Unity 3D. Disabling sync to vblank works if the movie is not in full-screen mode, but it doesn't help for full screen. I have searched around and haven't found much info. I am wondering if others are experiencing the same problem and if there is some known workaround that I have missed. Unity 3D is otherwise very nice in Precise, but this is a show-stopping issue for me (literally). Thanks. I have filed a bug here: https://bugs.launchpad.net/unity/+bug/993397

    Read the article

  • Does it help to be the core programmer of a product (a product meant for social good) when applying for a Ph.D. at a top university in the USA, say top 20?

    - by Maddy.Shik
    Hey, I am working as a core developer on a product that will be launched in the USA market in a few months, if successful. Can this factor improve my chances of getting into a Ph.D. program at a good university (say, top 20 in the US)? Normally, good universities like CMU, Stanford, MIT and Cornell are more interested in a student's profile: research work, undergraduate school, etc. I did not graduate from a very good university; it is only ranked in the top 20 in India. Nor have I done any research work so far. But being one of the founding members of the company and developing the product for it, I want to know if this factor can help, and to what extent. For universities ranked lower than 20, what matters most is the GRE General score and GPA, but I guess top universities must appreciate a person's real efforts.

    Read the article

  • How can I host a website on a dynamically-assigned IP address?

    - by nick
    I recently upgraded my internet connection to the point that it is much faster and more reliable than my current web host. I would like to move my current domain to be hosted at home, but my IP address is dynamic. As far as I know, I only get a new IP when I restart my modem and/or router (which is almost never) or when Cable One (my ISP) pushes out a firmware update (rarely). There are a few ways I can see doing this:
    - Convince my ISP to give me a static IP.
    - Assign my router my current IP to force a static IP (which might work?).
    - Set my DNS record to my current IP address and update it on the rare occasions that it changes.
    Obviously I'm hoping that the first one works, but I don't want to pay a lot of extra money (if that's what it takes) to get a static IP address. Which of these options will work most reliably?
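    For the third option, a rough cron-able sketch of the "notice when the IP changes" half (ifconfig.me is just an example lookup service, and the actual record update is registrar-specific, so it is left as a stub):

        #!/usr/bin/env bash
        # Detect a changed public IP so the DNS A record can be updated on the
        # rare occasions it actually changes.
        CACHE="$HOME/.last_public_ip"
        CURRENT=$(curl -s https://ifconfig.me)
        LAST=$(cat "$CACHE" 2>/dev/null)

        if [ -n "$CURRENT" ] && [ "$CURRENT" != "$LAST" ]; then
            echo "Public IP changed from '${LAST:-unknown}' to '$CURRENT' - update the A record."
            # TODO: call the DNS provider's update API here (provider-specific).
            echo "$CURRENT" > "$CACHE"
        fi

    Run from cron every few minutes, that is essentially a hand-rolled dynamic DNS; hosted services like DynDNS or No-IP automate the same loop if option 3 turns out to be the only choice.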

    Read the article

  • Firefox, Chrome, and Flash on Ubuntu

    - by Zimmer
    OK, I have recently run into some problems and was hoping you guys could help:
    1) On Chrome, sometimes when I play a video (even on YouTube) the audio won't work (yet other apps' audio will work), but after pressing the play button (pausing and unpausing the video) it finally works. However, if I pause the video and click play, it goes back to not working until I redo that process.
    2) When I go to play videos in Firefox or go to Grooveshark, it says I don't have Flash; but I do, and when I go to install Flash it says I have the last version for Linux. Flash works fine on Chrome (well, except for the audio problem above, which annoys me to no end!)

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila!

    But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "Looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:
    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting. There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are far less advanced along the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.
This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

Level D1 – Baseline
- Work directly on live databases
- Sometimes work directly in production
- Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
- Any tests that we might have are run manually

Level D2 – Beginner
- Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
- An attempt is made to keep production in sync with development environments
- There is some documentation and planning of manual deployments
- Some basic automated DB testing is in place

Level D3 – Intermediate
- The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
- Database environments are managed
- The production environment schema is reproducible from the source control system
- There are some automated tests
- Have looked at using migration scripts for difficult database refactoring cases

Level D4 – Advanced
- Using continuous integration for database changes
- Build, testing and deployment of DB changes carried out through a proper database release process
- Fully automated tests
- The production system is monitored for fast feedback to developers

Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this. I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service-oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months. Here's where the problem comes in. Basically, a couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking about that kind of support; I'm talking more of a second-level support guy (when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of workplace this is, I wouldn't be surprised if these support issues take up most of my days, and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code; they are more or less just knowing the system architecture, making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Also, even when I do have time to work on maintenance, it's basically just bug fixes/improving bad code, so this sucks as well; however, at least it's related to coding. Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything. I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy that knows it's not that great of an opportunity. I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being, no point making a bad impression on anybody, but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it/how you guys would feel. Thanks guys.

    Read the article

  • Need help installing Ubuntu on a Win7 Alienware M17xR3; installation not working

    - by Rodrigo
    Please, I need help installing Linux on my computer to start analyzing Next Generation Sequencing (molecular genetics) data for my work. I've tried to install it in all different ways and forms, and it gave me different errors. First I will describe my system: Alienware M17x R3, RAID drive 596GB (2x 298GB), Windows 7 primary partition C:. The hard drive is divided into 3 primary partitions (can't do 4, don't know why):
    1st: FAT16, no name, capacity 40MB, almost unused (has files like command.com, dellbio.bin, dellrmk.bin and oobedone.flg). I don't know what this is for, but it is blocked and non-system (Status: none).
    2nd: Recovery, NTFS, 9GB (Status: system); it seems to hold boot files.
    3rd: C:, Windows OS (Status: boot).
    And I have 2 logical drives: F: data for backup and games :D (300GB), and G: for trying to install Linux (I've already tried it as FAT32, ext2, ext3 and ext4).
    Installing with Wubi on F: and G:: the installation looks good on Windows, it asks to restart, and then a DOS-mode screen appears:
      try (HD0,0) non-ms:skip
      try (HD0,1) NTFS5: No Wibildr
      try (HD0,2) NTFS5: Error: "Prefix" is not set.
    Installing with Wubi on C:: the installation looks good on Windows, it asks to restart, and then a DOS-mode screen appears: Mount denied because the NTFS volume is already exclusively opened; the volume may already be mounted, or another software may be using it, which could be identified for example with the help of the 'FUSER' command. Mount devdm-4. Could not find installation files /ubuntu/install custom-installation. Run 'Chkdsk /r' on Windows and restart... that should correct it. I did this, but it didn't work.
    I also tried installing from a USB drive and from a CD burned with the ISO file downloaded from the official website: I've tried the option to install alongside Windows with the G: drive as FAT32, ext2, 3 and 4; I've tried leaving unallocated space; I've tried without the G: drive at all... and every time this appears: http://i.stack.imgur.com/eo2Ie.png I press OK and it asks me to: "Not possible to install the boot loader at the location: specify new location". I've tried selecting all the drives; none is accepted. If I ask to continue without installing the boot loader, it tells me to install it later by myself, which I'm trying to figure out, because I'm not really good at the command-line side of things; I'm a newb and I haven't worked on DOS or Linux before... I've read that the boot loader has to be installed on a primary partition, but I'm not sure what I can delete from those 2 partitions that would let me get 1 extra, and it seems that the one Windows uses is write-protected, so the software is not touching it... Please, if someone has any idea to help me out... I have a course tomorrow on how to use the software to analyze the data, and I wanted to take my computer with me, so I've spent the last 3 days trying different ways to solve this. My question is also posted on bugs.launchpad.net, here: launchpad.net bug #882695

    Read the article

  • Establishing relationships with unsolicited recruiters

    - by Michael
    Several times each year, I receive unsolicited introductions from tech recruiters, usually via LinkedIn and usually from local firms. I am not currently looking for a new job. Is it advisable to establish a relationship with one or more recruiters when I'm not interested in finding new work, so that they have my resume on file? Here's another way I approach the question: my plumber was first hired at the point of need: I had a plumbing problem, looked up a few plumbers, and evaluated and hired based on their demeanor and cost estimates. I established a relationship with a general attorney, on the other hand, well in advance of ever actually needing services, so that if or when services are needed he already knows enough about me to begin work. Should I approach recruiters like I approached my plumber or my lawyer? A separate discussion, I suppose, is whether the type of recruiters who troll LinkedIn for clients are generally helpful or not. Edit: I have never worked with a recruiter before, and therefore have little idea what to expect.

    Read the article

  • Suggestions for getting an open source electron beam tracking code going

    - by Boaz
    I work in the field of accelerator physics and synchrotron radiation. High energy electrons circulating in large rings of magnets produce x-rays that are used for a variety of different kinds of science. Running and improving these facilities requires controlling and modelling the electron beam as it circulates in the ring. A code to model this basically requires trackers to follow the electrons through the elements (something called a symplectic integrator), and then the computation of different parameters associated with this motion. The problem with these codes is that every facility has their own. In principle the code is not so complex. And as a modelling project, one might think it has some general interest. Who doesn't want to be able to create a track in space out of magnets and watch the electrons circulate? There is a Matlab-based code to do this called Accelerator Toolbox, but the creator of the code is no longer in the field. I put the code in Sourceforge under the name atcollab. The basic resource is in C: the set of symplectic integrators. These are available in the atcollab code here. It has been useful to put the code on Sourceforge in order to exchange code, but the community of users is quite small and most are too busy to put that much time into collaboration. So in terms of really improving the code, I don't think it has been so successful. Any piece of this picture could be recreated without that much difficulty, but overall it is a bit complex, and because each lab has their own installation with lots of add-on Matlab code, people find it hard to really work together and share code. Somehow I think we need to involve a wider community in our development, or just use some standard tools. But for that, I suppose it needs to be of some general interest. I think symplectic integrators may have some general interest. And the part about a plug-in architecture to build up the ring ought to fit other patterns. Or the other option is to just accept that this is not a problem of general interest, and work harder within our small community. Suggestions, or anecdotes of analogous experience, would be appreciated.

    Read the article

  • Can I force a window to open on top of other windows when it is opened by a keyboard shortcut?

    - by Rasmus
    I use SpaceFM as my primary file manager on Ubuntu. I typically open folders directly with keyboard shortcuts, so, e.g., Ctrl+Super+W opens my Work folder. Specifically, I execute the command spacefm -w /home/rasmus/Work/ via the above shortcut, with the -w ensuring that SpaceFM opens a new window. However, this new window does not always open on top of the last active window on the workspace. This is rather annoying, as it means I sometimes have to "dig" for the newly opened window. So, my question is: is there something additional I can add to the executed command that will ensure that the fresh window is opened on top? Alternative solutions to the same effect are welcome.
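    In case a workaround is acceptable, a rough sketch that opens the folder and then explicitly raises the new window (matching on "Work" assumes SpaceFM puts the folder name in the window title; check with wmctrl -l and adjust the pattern):

        #!/usr/bin/env bash
        spacefm -w /home/rasmus/Work/ &
        sleep 0.5                     # give the window a moment to appear

        # Activate (raise + focus) a window whose title contains "Work"
        wmctrl -a "Work" || xdotool search --name "Work" windowactivate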

    Read the article
