Search Results

Search found 3920 results on 157 pages for 'dave mess'.

  • Need help fixing DPKG errors after update from 12.04 to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly, no matter what I do. What is happening here, and how do I fix it? If I had known 12.10 would be this much of a hassle I would never have upgraded. Here is a sample of the output from "apt-get -f install". It should also be noted that it is just these 6 packages; no other packages have given me this kind of trouble, at least as of now. It was just 5, but then I got an update for Unity, and now unity-common has joined the troublemakers, which prevents me from upgrading the actual unity package, since unity-common is a dependency of it.

        Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
        /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg: trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
        dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
        subprocess new pre-removal script returned error exit status 2
        /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
        /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
        /var/cache/apt/archives/pcmciautils_018-8_i386.deb
        /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
        /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
        /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
        /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I would also like to note that I have cleaned the apt cache, both through the terminal and manually; I have tried installing the packages manually through dpkg, from both /var/cache/apt/archives/ and from my own manually downloaded .deb files; I have tried dpkg-reconfigure; and I have used BleachBit to clean my system. I have also tested both my HDD and memory and found no significant errors that would explain the input/output errors. Quite frankly, I am just out of options and have grown tired of trying to Google a solution to this mess, but I still do not wish to resort to backing up settings and reinstalling the system. Any help would be appreciated. I am only interested in answers; please leave your feelings about grammar, punctuation, and bias towards how a "post should look" at the door. If you don't have something to contribute towards solving my problem then you are doing nothing but contributing to it. Thank you.
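
    One way to narrow down Input/output errors like these is to check whether the maintainer scripts themselves are readable, since EIO on a specific file usually points at filesystem or disk damage rather than at the packages. A minimal diagnostic sketch, assuming the standard Debian/Ubuntu layout under /var/lib/dpkg/info:

        # Sketch: try to read every dpkg maintainer script and report I/O errors.
        import os

        INFO_DIR = "/var/lib/dpkg/info"

        for name in sorted(os.listdir(INFO_DIR)):
            if not name.endswith((".preinst", ".prerm", ".postinst", ".postrm")):
                continue
            path = os.path.join(INFO_DIR, name)
            try:
                with open(path, "rb") as f:
                    f.read()
            except (IOError, OSError) as err:
                print("UNREADABLE: %s (%s)" % (path, err))

    If the scripts for the six failing packages show up here, the underlying blocks are damaged and no amount of apt cache cleaning will help.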

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

  • Deferred Shading - Toolkit

    - by AliveDevil
    I recently managed to get some lights rendered in a scene by using a buffer and a for-loop. The problem with this method is the performance drop as more lights are used. I tried to adapt Deferred Rendering in XNA4.0 | ROY-T.NL, but it is not working, because I am not using any models. I know I have to render color, normals and lights separately, but I don't know how to get it working. To understand my structure better: I'm using a world class which holds some chunks. These chunks load all the vertices from their items. These items have a property which returns the vertices. An item returns VertexPositionNormalTexture[]. The chunk loads these vertices and combines them into one large array of VertexPositionNormalTexture via someList.AsParallel().SelectMany(m => m).ToArray(), where m is a VertexPositionNormalTexture and someList is a List<VertexPositionNormalTexture>. I have my own shader to draw these vertices the way I want them drawn. The first thing I would try is setting up two RenderTarget2Ds for rendering the color and normal parts, with two different shaders. Then I would have to render the lights, and there's the problem: I don't know how. I set up a structure to simplify working with lights, but it didn't really help.

        public struct Light
        {
            public Vector3 Position;
            public Color4 Color;
            public float Range;
            public float Intensity;

            public Light(Vector3 position, Color color, float range, float intensity)
                : this()
            {
                this.Position = position;
                this.Color = color;
                this.Range = range;
                this.Intensity = intensity;
            }

            public float[] Definition
            {
                get
                {
                    return new[]
                    {
                        Position.X, Position.Y, Position.Z,
                        Color.Red, Color.Green, Color.Blue,
                        Intensity, Range
                    };
                }
            }
        }

    The next part is equally difficult, because I don't know how to combine the colorMap, normalMap and lightMap into one finalMap. Some information about the system: I'm using SharpDX (a nightly build from some months ago) and the SharpDX.Toolkit (I don't want to mess with Direct3DDevice and similar things). Can someone help me with this problem? If things are missing or I provided insufficient information, tell me; I need to get deferred shading working. Things I'm not able to do: create a render target which holds all lights, merge colorMap, normalMap and lightMap into one finalMap, and present this to the user.

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.

    /dev/sda1 (NTFS), 66.92 out of 200.00 MB used. I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus it also appears to be a major pain to remove anyways.)

    /dev/sda2 (NTFS), 116.35 out of 339.06 GB used (boot). This partition is the C:/ drive on my Windows installation. I don't use it on my Ubuntu installation, except it is the boot partition and thus has grub on it.

    /dev/sda4 (extended), containing /dev/sda5 (ext4), 14.49 out of 91.34 GB used, and /dev/sda6 (linux-swap), 5.92 GB. These are my Ubuntu partitions. /sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). /sda6 is, of course, the swap partition which I only need for hibernation (6GB of RAM).

    /dev/sda3 (NTFS), 9.89 out of 14.75 GB used. This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKeyRecovery for a quick factory recovery if absolutely necessary, not sure if that'll work on an SSD. It also contains not-so-important files for bloatware installation.

    In total, my HDD only has about 150GB of files on it so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD and I'm not quite sure how to do this. I've seen CloneZilla referenced before, but I'm not all too experienced and the documentation for it quite frankly seems a bit like a foreign language to me. So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process due to the lack of two hard drive slots in the machine.

  • 6 Ways to Modernize Your Customer Experience

    - by Mike Stiles
    If customers have changed, if the way they research and shop has changed, if their expectations have changed, if their ability to act on dissatisfaction has changed, but your customer experience has NOT changed, what was once “good enough” may now be crippling. Well, the customer has changed, and why wouldn’t they? You’ve probably changed too in your role as consumer. There’s more info available, it’s easier to get, there’s more choice, you’re more mobile, you’re more connected, it’s easier to buy, and yes, it’s easier to switch brands if experiences don’t meet your now higher expectations. Thanks to technological advances, we as marketers can increasingly work borderline miracles. But if we’re still not adamantly adopting customer centricity, and if we aren’t making the customer experience paramount amongst business goals, the tech is wasted. A far more modern customer experience is called for. Here are 6 ways to get there:

    1. Modern Marketing: Marketing data is aggregated and targeted to the right customers, who are getting personal, relevant communications. In return, you’re getting insight that finally properly attributes revenue to your marketing efforts.
    2. Modern Selling: Demand is being driven across all channels with modern selling tools. Productivity is up thanks to coordinated communication and selling, and performance is ever optimized using powerful analytics.
    3. Modern CPQ: You’re cross-selling and upselling more effectively since reps and channel partners have been empowered with the ability to quickly, automatically generate 100% accurate, customer-friendly quotes complete with price controls and automated approvals.
    4. Modern Commerce: You’re leveraging data and delivering personalized, targeted digital experiences to everyone. You’re attracting more visitors, and you’re able to scale and keep up with the market and control the experience.
    5. Modern Service: You’re better serving your customers by making it easier for them to engage with your brand, plus you’re lowering your costs by increasing agent and tech support efficiencies.
    6. Modern Social: You’re getting faster, deeper, more accurate insights from social and turning content around faster, which then goes out to the right people at the right time in the right place. You’ve also gotten proactive in your service, and customers love that.

    For far too many brands, the buying journey of Need, Research, Select, Buy, Use, Recommend across the multiple connect points of Social, Mobile, Store, Call Center, Site, Ecommerce is a disconnected mess. Oracle’s approach to CX is to connect every interaction your customer has with your brand, avoiding the revenue losses lousy customer experiences bring. How important is the experience to customers? 94% are willing to pay more of their hard-earned money to have better ones, while a meager 1% say they get the good, consistent experiences they expect. Brands, your words aren’t as loud anymore, so your actions as they relate to customer experience are going to have to do the talking. @mikestiles @oraclesocial
    Photo: Julien Tromeur, freeimages.com

  • 80 Years of Supplier Misinformation: How can Oracle Supplier Hub Help?

    - by Mala Narasimharajan
    By Mark Peachy

    Well, we're down to the final week before this year's Oracle Open World conference kicks off on Sunday, and there's still plenty of work to be done to be ready in time. One of the great benefits I think attendees get from Open World is the opportunity to listen to other organizations talk about their implementation experiences. Typically, these sessions provide hugely valuable insights that have been gained during a deployment, delivering a wealth of practical information on what it really takes to get an organization up and running with a new module or a revamped business process. And I'm not just saying this because we're lucky enough to have one of our early implementers join us for this year's Supplier Hub/Supplier Lifecycle Management MDM session! With a multi-phased deployment underway, this customer is working to fix a long, 80-year history without much in the way of formal processes or tools to manage all of their accumulated supplier information. Faced with a mess of supplier details, they had been challenged to efficiently track supplier spend, monitor performance, maintain qualification information or carry out meaningful risk analysis. Join us on Wednesday to hear how they are addressing these issues and the plans they have to evolve their supplier management techniques - it's a great story.

    CON9242: Oracle Supplier Lifecycle Management and Oracle Supplier Hub for Better Supply Base Management
    Wednesday, October 3rd at 1:15 PM
    InterContinental Hotel, Sutter Suite

  • What to do when an open source project starts to tear itself apart? (or: a manager tries to write code and then shouts at the team)

    - by Kabumbus
    Imagine there is an open source cross-platform project on Google Code. It has lots of revisions (1000). It concentrates a lot of technology in itself, some of it rare; it mixes top tech. It contains a server and more than one client. The project was created by a well-connected team of developers (friends) and a manager who sponsored the project at its start, during its first few months (the project is now more than a year old; sponsoring an OSS project is a big, good deal, and he also gave the developers the idea for the project). The project was growing in complexity, and in the effort required to continue development. Once upon a time the manager/team leader started trying to write code (he was a programmer on some other projects; not the best, but he felt like he was one). He started because one of the developers suggested an idea at a team meeting, and he felt he just needed to do it on his own. He failed, and he told the dev team about it. The dev team did what he had failed to do in a few days. After that, the manager felt that the team codes perfectly without him and gets the job done in a short time. He felt sorry and lost, and he started to crash like an old, bad PC. First, he started to scream (in the form of messages, not voice): he tried to tell the developers that what they were doing was a bad, not-needed thing. The developers kindly told him that his "beginnings" were not compilable, while the dev team's product worked as needed. He told the developers that all work they do should first be discussed with him. Here is the part where we need to mention that all team members are "project owners" and logically have equal rights. The team leader offered the developers these options: change their dev process to go through him, or be moved from project owners to contributors. So what are our options as developers? What arguments can we provide to the team leader/manager to calm him down? Is it possible to save the project, or is it better to fork now? An important issue is that lately we have had no active ticket system, and I personally think that this is the reason the mess appeared. So... any ideas?

  • Why do my ints, booleans, and doubles not work?

    - by SystemNetworks
    As you can see, my code does not work. When armor1 is true, it should add to my life. goldA is an instance of another class.

        public void goldenArmor(GameContainer gc, StateBasedGame sbg, Graphics g) {
            if(armor1==true) {
                goldA.life = life;
                goldA.intelligence = intelligence;
                goldA.power = power;
                goldA.lifeLeft = lifeLeft;
                goldA.head();
                goldA.body();
                goldA.legs();
            }
        }

    My other class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.SlickException;

        /*
        Note: Copyright(C)2012 System Networks | Square NET | Julius Bryan Gambe.
        You cannot copy the style, story of the game and gameplay!
        To programmers: The int, doubles, strings, booleans are properly sorted out.
        Please don't mess it up.
        */

        /*
        NOTE: We have loops but not for programming. The loop is:
        1. Show the world to the user
        2. Obtain input from the user
        3. Show the update, repeat step 1
        */

        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;
        import org.lwjgl.input.Mouse;

        // contents:
        public class GoldenArmor {
            // get it from play
            public int life;
            public double intelligence;
            public int lifeLeft;
            public double power;

            public GoldenArmor() {
                // TODO Auto-generated constructor stub
            }

            // start here
            public void head() {
                life += 10;
                intelligence += 0.5;
            }
            public void body() {
                lifeLeft += 100;
            }
            public void legs() {
                power += 100;
            }
        }
        /* SYSTEM NETWORKS(C) 2012 NET FRONT */

    The life, intelligence, power and lifeLeft fields here are meant as nothing but references to the values, to prevent a stack overflow; back in my main class they become my real booleans, ints and doubles. How do I fix this? The result never gets added to my normal ints.
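
    The root cause here is copy semantics: goldA.life = life copies the current value once, and goldA.head() then updates goldA's own copy, which is never copied back. The same pitfall, sketched in Python purely for illustration (the class and field names mirror the post, but this is not the poster's code):

        # Assigning a primitive copies its value; there is no lasting link.
        class GoldenArmor(object):
            def __init__(self):
                self.life = 0

            def head(self):
                self.life += 10   # updates the armor's own copy only

        life = 100
        gold_a = GoldenArmor()
        gold_a.life = life        # copies 100; 'life' is not linked to it
        gold_a.head()
        print(life)               # still 100
        print(gold_a.life)        # 110 -- must be copied back explicitly:
        life = gold_a.life

    In the Java code the fix is the same idea: after calling goldA.head(), goldA.body() and goldA.legs(), copy the fields back (life = goldA.life; and so on), or have the methods return the new values.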

  • SQL SERVER – Fix: Error: 147 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference

    - by pinaldave
    Everybody was a beginner once, and I always like to get involved in questions from beginners. There is a big difference between a question from a beginner and a question from an advanced user. I have noticed that if an advanced user gets an error, they usually need just a small hint to resolve the problem. However, when a beginner gets an error he sometimes sits on it for a long time, as he or she has no idea how to solve the problem, and no idea of what the product is capable of. I recently received a very novice-level question. When I received the problem I quickly saw where the user was stuck. When I replied with the solution, he wrote a long email explaining how he had not been able to solve the problem, and thanked me multiple times. This whole thing inspired me to write this quick blog post. I have modified the user's question to match the code with AdventureWorks, and simplified it so it contains the core content I wanted to discuss.

    Problem statement: find all the details of SalesOrderHeaders for the latest ShipDate. He came up with the following T-SQL query:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        WHERE ShipDate = MAX(ShipDate)
        GO

    When he executed the above script it gave him the following error:

        Msg 147, Level 15, State 1, Line 3
        An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.

    He was not able to resolve this problem, even though the solution is suggested in the error message itself. Due to lack of experience he came up with another version of the query, based on the error message:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        HAVING ShipDate = MAX(ShipDate)
        GO

    When he ran the above query it produced another error:

        Msg 8121, Level 16, State 1, Line 3
        Column 'Sales.SalesOrderHeader.ShipDate' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause.

    What he actually wanted was all the SalesOrderHeader rows for sales shipped on the last day. Based on the problem statement, the right solution is the following, which does not generate an error:

        SELECT *
        FROM [Sales].[SalesOrderHeader]
        WHERE ShipDate = (SELECT MAX(ShipDate) FROM [Sales].[SalesOrderHeader])

    Well, that's it! Very simple. With SQL Server there are always multiple solutions to a single problem. Is there any other solution to the problem stated? Please share in the comments.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
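
    The restriction, incidentally, is not specific to SQL Server; most engines reject a bare aggregate in WHERE for the same reason. A self-contained way to see both the error and the subquery fix, sketched here with Python's built-in sqlite3 module rather than T-SQL:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE SalesOrderHeader (OrderID INTEGER, ShipDate TEXT)")
        con.executemany("INSERT INTO SalesOrderHeader VALUES (?, ?)",
                        [(1, "2008-06-30"), (2, "2008-07-01"), (3, "2008-07-01")])

        try:
            con.execute("SELECT * FROM SalesOrderHeader WHERE ShipDate = MAX(ShipDate)")
        except sqlite3.OperationalError as err:
            print(err)  # SQLite's analogue of Msg 147: misuse of aggregate

        rows = con.execute("""
            SELECT * FROM SalesOrderHeader
            WHERE ShipDate = (SELECT MAX(ShipDate) FROM SalesOrderHeader)
        """).fetchall()
        print(rows)  # both orders shipped on the latest date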

  • SQL SERVER – ERROR: FIX using Compatibility Level – Database diagram support objects cannot be installed because this database does not have a valid owner – Part 2

    - by pinaldave
    Earlier I wrote a blog post about how to resolve the error with database diagrams. Today I faced the same error when I was dealing with a database which had been upgraded from SQL Server 2005 to SQL Server 2008 R2. When I searched for the solution online I ended up on my own earlier solution: SQL SERVER – ERROR: FIX – Database diagram support objects cannot be installed because this database does not have a valid owner. I really found it interesting that I ended up on my own solution. However, the solution to the problem this time was a bit different. Let us see how we can resolve it.

    Error: Database diagram support objects cannot be installed because this database does not have a valid owner. To continue, first use the Files page of the Database Properties dialog box or the ALTER AUTHORIZATION statement to set the database owner to a valid login, then add the database diagram support objects.

    Workaround / Fix / Solution: follow the steps listed below and it should for sure solve your problem. (NOTE: please try this for databases upgraded from a previous version. For everybody else, you should just follow the steps mentioned here.)

    1. Select your database >> right click >> select Properties.
    2. Go to the Options page.
    3. In the dropdown at right labeled “Compatibility Level”, choose “SQL Server 2005 (90)”.
    4. Select Files on the left side of the page.
    5. In the Owner box, select the button which has three dots (…) in it.
    6. Now select user ‘sa’ or NT AUTHORITY\SYSTEM and click OK.

    This will solve your problem. However, there is one very important note you must consider: when you change any database owner, there are always security-related implications. I suggest you check your security policies before changing authorization. I did this to quickly solve my problem on my development server. If you are on a production server, you may open yourself to a potential security compromise.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job¹. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en-masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    ¹For the record, here's a quick summary of the various standard directories that should be used instead of "Documents":

    - RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here though, because they slow down login/logout in such environments.
    - LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
    - ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
    - GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
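
    As a concrete illustration of the footnote, here is how a program can resolve those locations instead of hard-coding "Documents". This sketch assumes the third-party appdirs package (pip install appdirs); the application name and author are placeholders:

        import appdirs

        APP, AUTHOR = "MyApp", "MyCompany"

        print(appdirs.user_data_dir(APP, AUTHOR, roaming=True))  # RoamingAppData
        print(appdirs.user_data_dir(APP, AUTHOR))                # LocalAppData
        print(appdirs.site_data_dir(APP, AUTHOR))                # ProgramData
        print(appdirs.user_cache_dir(APP, AUTHOR))               # cache location
        # "Documents" never appears: it should only come from the user, via a
        # Save dialog or an explicitly typed path.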

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now. I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, that at this point the level of frustration mixed with the disappointing lack of performance is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files, which is not much more useful, and maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit-testing suite (I think that comment was made in 2008). So, I'm just wondering: after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this (a minimal sketch of steps 2 and 3 follows below):

    1. I create a new branch (with Git, in my case)
    2. I write a few unit tests (with JUnit, for example)
    3. I write my code until it passes my tests

    So far, so good. The problem is that very often I do not know what I can do with the toolkit, because it is brand new to me. I work as a consultant, so I cannot have my preferred language/platform/toolkit; I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by "trial and error"), and this makes me anxious. Please note that, at some point in the learning process, I usually already have: read one or more five-star books; followed one or more web tutorials (writing working code a line at a time); and created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans and others as well). Despite all my efforts, at this point I usually have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook, because the code I have to write is already too large and complex for that small tool. In the same way, I can no longer use an independent small project for my experiments, because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?
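
    For reference, the branch-test-code loop above looks like this in miniature when the test is written first. This is an illustration in Python's unittest rather than the JUnit mentioned in the question, and the slugify function is an invented stand-in for whatever toolkit feature is being learned:

        import re
        import unittest

        def slugify(title):
            # written *after* the tests below, and grown until they pass
            return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

        class TestSlugify(unittest.TestCase):
            def test_spaces_and_case(self):
                self.assertEqual(slugify("Hello World"), "hello-world")

            def test_punctuation_collapses(self):
                self.assertEqual(slugify("Tips & Tricks!"), "tips-tricks")

        if __name__ == "__main__":
            unittest.main()

    The failing tests double as a record of what was learned about the toolkit, which helps keep the trial and error out of the production code paths.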

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation was that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment provides a management nightmare to all commercial users responsible for complying with all regulations that control the conduct of business: such as tracking, retaining, and recording company documents. SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: It has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 Gig limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess. Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or USB keys for off-line working. The result is that the System Administrator is faced by a complex hybrid system where backups have to be taken from Servers, and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell. If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael
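
    How little SMTP verifies is easy to demonstrate: the sender is simply whatever the client claims. A short sketch using Python's smtplib, pointed at a local debugging server only (for example one started with "python -m smtpd -n -c DebuggingServer localhost:1025" on older Python versions):

        import smtplib
        from email.mime.text import MIMEText

        msg = MIMEText("SMTP takes my word for who I am.")
        msg["From"] = "anyone@example.com"   # an unverified claim
        msg["To"] = "you@example.com"
        msg["Subject"] = "Plain text, with no proof of origin"

        with smtplib.SMTP("localhost", 1025) as smtp:
            smtp.sendmail("anyone@example.com", ["you@example.com"], msg.as_string())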

  • Working the Chart Percentages

    - by Tim Dexter
    Charting in BIP is such fun. Well, sometimes it is; not so much today, at least not for Ron in San Diego. He needed a horizontal bar chart showing values plotted for various test areas, with value labels at the end of the bars. Simple enough, right? The wrinkle: they were percentage values, so he needed to see '56%', not '56'! Still, it should be simple enough, but the percentage formatting requires your values to be in decimal form, i.e. 0.56 not 56.0; 56.0 gets formatted as 5600%. OK, so either pull the values out as decimals or use the div function to divide the values in the chart by 100, e.g.

        <xsl:value-of select="myval div 100" />

    Now I can use the following chart XML to format the percentages as I need them:

        <Graph ... >
          ...
          <MarkerText visible="true">
            <Y1ViewFormat>
              <ViewFormat numberType="NUMTYPE_PERCENT" decimalDigit="0" numberTypeUsed="true" leadingZeroUsed="true" decimalDigitUsed="true"/>
            </Y1ViewFormat>
          </MarkerText>
          ...
        </Graph>

    That gets me the values shown the way I want, but the auto axis formatting gets me from 0 >> 1. I now need to go in and add the formatting for the axis too.

        <Graph ...>
          ...
          <Y1Axis axisMinAutoScaled="false" axisMinValue="0.0" axisMaxAutoScaled="false" axisMaxValue="1.0" majorTickStepAutomatic="true">
            <ViewFormat numberType="NUMTYPE_PERCENT" decimalDigit="0" scaleFactor="SCALEFACTOR_NONE" numberTypeUsed="true" leadingZeroUsed="true" decimalDigitUsed="true" scaleFactorUsed="true"/>
          </Y1Axis>

    Now I have a chart that's showing the percentage values and formatting the axis scale correctly for me too. You can of course mess with the attributes above to get more decimal points on your labels, etc. Happy Charting!
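
    The decimals-in, percent-out convention is the same one general-purpose formatters use, which makes the 5600% symptom easy to sanity-check outside BIP; a quick illustration in Python:

        print("{:.0%}".format(0.56))    # 56%   -- value already divided by 100
        print("{:.0%}".format(56.0))    # 5600% -- the symptom described above
        print("{:.1%}".format(0.5649))  # 56.5% -- extra decimal digits if wanted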

  • New laptop, Windows 8.1, attempting dual install. Ubuntu installer doesn't 'see' existing OS

    - by Flaminica
    Though I've used Ubuntu for a few years, I'm new to installation. Previously I had help, and now I'm doing it alone (I moved across the world). Windows 8.1 came preinstalled on my new laptop (Toshiba Satellite C70-A-17C - Core i5, 8 GB RAM, 750 GB HDD). I have already followed a few steps I found online to prepare for a dual install (with Ubuntu 14.04). I backed up Windows, created a bootable Ubuntu USB and DVD (just in case one didn't work), turned off fast boot and secure boot, and shrank C:/. The new unallocated drive portion is 292.97 GB. After shrinking C:/, I restarted Windows a couple of times to make sure everything was working fine (it is). I then attempted to install with the Ubuntu live USB. However, the Ubuntu installer doesn't see that Windows 8.1 is already installed. I don't understand this, and don't want to mess with Ubuntu partitioning when I don't know where the partitions will be created. My concern is that, if I go further with the installation process, Windows might be overwritten or compromised in some way. I then tried to reboot using the Ubuntu live DVD, thinking I might get a different result. However, I can't figure out how to make the laptop boot from the CD drive. I went into the BIOS and found no option there, either. Any help is very appreciated!

    EDIT: Looks like I can't link directly to each photo. Here is my album of screenshots: http://imgur.com/a/zChCo

    Screenshot captions:
    - Here you can see that there's no option to boot from the CD drive, only USB.
    - Everything looks okay so far.
    - I don't understand this. Ubuntu has not yet been installed.
    - Unmounting partitions? (I chose 'no'.)
    - Even though the laptop came pre-installed with Windows 8.1, the Ubuntu USB installer can't see it. I chose 'something else'.
    - I need to pick and format partitions. I scrolled down and took a second shot to include all information.
    - Completely lost, so I cancelled the installation.

  • Windows 7 XP Mode disable time sync

    - by Oskar Duveborn
    So I've tried the trick from Virtual PC 2007, adding the following section to the .vmc configuration file:

        <components>
          <host_time_sync>
            <enabled type="boolean">false</enabled>
          </host_time_sync>
        </components>

    Later someone suggested VPC doesn't want the components level, so I added this instead:

        <host_time_sync>
          <enabled type="boolean">false</enabled>
          <frequency type="integer">15</frequency>
          <threshold type="integer">10</threshold>
        </host_time_sync>

    When I start up XP Mode (Microsoft Virtual PC), it completely ignores both of these configuration changes, and if I change the clock it's instantly reset to the host time again. I've also obviously disabled the Windows Time service, but as the guest is not joined to a domain or set up with a source, it shouldn't be involved anyway. I need to test an application over a few midnight passes and thought the XP Mode machine would be perfect, so I didn't have to mess with my workstation clock... is there any way to get the VPC guest to not sync time with the host? This is easy in Hyper-V ;p

  • Getting a "pyserial not loaded" error

    - by skinnyTOD
    I'm getting a "pyserial not loaded" error with the Python script fragment below (running OS X 10.7.4). I'm trying to run a Python app called Myro for controlling the Parallax Scribbler2 robot; I figured it would be a fun way to learn a bit of Python, but I'm not getting out of the gate here. I've searched out all the Myro help docs, but like a lot of in-progress open source programs they are a moving target: conflicting, out of date, or not very specific about OS X. I have MacPorts installed and installed py27-serial without error. MacPorts lists the Python versions I have installed, along with the active version:

        Available versions for python:
        none
        python24
        python25
        python25-apple
        python26
        python26-apple
        python27
        python27-apple (active)
        python32

    Perhaps stuff is getting installed in the wrong places, or my PATH is wrong (I don't much know what I am doing in Terminal and have probably screwed something up). Trying to find out about my sys.path, here's what I get:

        >>> import sys
        >>> sys.path
        ['', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']

    Is that a mess? Can I fix it? Anyway, thanks for reading this far. Here's the Python bit that is throwing the error; the error occurs on 'import serial'.

        # Global variable robot, to set SerialPort()
        robot = None
        pythonVer = "?"
        rbString = None
        ptString = None
        statusText = None

        # Now, let's import things
        import urllib
        import tempfile
        import os, sys, time
        try:
            import serial
        except:
            print("WARNING: pyserial not loaded: can't upgrade!")
            sys.exit()
        try:
            input = raw_input  # Python 2.x
        except:
            pass  # Python 3 and better, input is defined
        try:
            from tkinter import *
            pythonver = "3"
        except:
            try:
                from Tkinter import *
                pythonver = "2"
            except:
                pythonver = "?"
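
    Since MacPorts and Apple each ship their own Python 2.7, the usual cause of "pyserial not loaded" here is that py27-serial was installed into the MacPorts interpreter while Myro is being run by Apple's (the sys.path above is the Apple framework's). A small check to run with whichever interpreter launches Myro; the two paths in the comments are typical defaults, so treat them as assumptions:

        # Run with the same interpreter that runs Myro, e.g.:
        #   /opt/local/bin/python2.7 check_serial.py   (MacPorts)
        #   /usr/bin/python2.7 check_serial.py         (Apple)
        import sys

        print("interpreter: %s" % sys.executable)
        try:
            import serial
            print("pyserial at: %s (version %s)"
                  % (serial.__file__, getattr(serial, "VERSION", "?")))
        except ImportError as err:
            print("pyserial NOT importable here: %s" % err)

    If the two runs disagree, either install pyserial into the interpreter Myro uses or launch Myro with the MacPorts python.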

  • PowerShell: New-PSDrive error handling

    - by mazebuhu
    Hello, I have a script which mounts a network drive with the command New-PSDrive. Since the script is running as a "cronjob" on a server, I want to have some error detection: if for any reason the New-PSDrive command fails, the script should stop executing and notify that something went wrong. I have the following code:

        Try {
            New-PSDrive -Name A -PSProvider FileSystem -Root \\server\share
        } Catch {
            ... handle error case ...
        }
        ... other code ...

    For testing reasons I specified a wrong server name, and I get the following error: "New-PSDrive : Drive root "\\wrongserver\share" does not exist or it's not a folder". Which is OK, since the server does not exist. But the script does not go into the Catch clause and stop; it happily continues to run and ends up in a mess, since no drive is mounted :-) So my question: why? Is exception handling somehow different in PowerShell? I should also note that I'm a noob at PowerShell scripting. Bye, Martin

  • How to mount a LOFS in Solaris that doesn’t cross mountpoints

    - by jcea
    I need to access my "root" ZFS dataset to delete a file under /var, but /var is overlaid by another ZFS dataset. Since these are system datasets I can't "umount" them while the machine is running, and I want to avoid rebooting the system in "failsafe" mode, since this is a production machine. Theoretically ZFS would refuse to mount the /var dataset over the underlying /var, because it is not empty; but it works, possibly because they are system datasets mounted early in the boot process. Having the underlying /var not empty, however, is preventing me from creating an ABE (Alternate Boot Environment), so patching is risky, and I can't upgrade my system using Live Upgrade. The machine is remote. I have an IP KVM, but I would rather avoid booting this machine in "failsafe" mode if I can. I know there is a file in /var because I can snapshot the "root" dataset and check it. But snapshots are read-only, so I can't get rid of the file. I tried "mkdir /tmp/zzz; mount -F lofs / /tmp/zzz", but when I go to /tmp/zzz/var, I see the /var dataset, not the underlying "root" dataset. That is, the LOFS is crossing mountpoints. I would usually like that, but not this time! Any suggestion, besides rebooting the machine in "failsafe" mode and messing with it through the IP KVM?

  • How to boot Linux from a 16gb USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4gb. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, which unfortunately break down when I try to scale it up beyond 4gb. There are several other tools to do this (unetbootin and usb-creator) which follow a very similar formula. I figured out that a big problem of mine was that all of these tools assume the USB drive is formatted in FAT32, which unfortunately cannot hold a single file larger than 4gb. This is unfortunate because I want to use just one partition, so that my persistance file, casper-rw, looks like one big partition to the OS once I've booted off of the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me however, after about 20 attempts verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the "sync" and "unmount" commands before rebooting which didn't affect things at all. I guess I have several possible questions - but let's start with the obvious - is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16gb)?

  • SharePoint 2010 User Profile Synchronization

    - by manemawanna
    Hello, I'm completely new to working with SharePoint and Windows Server, but last week I was given a small brief to play with SharePoint 2010 to see how I got along with it. Anyway, I've set up a SharePoint server and had a mess around getting some new sites and pages created etc., but I'm now looking to have a try at importing some AD groups. As part of this I've looked at these tutorials, here and here. So far I've got through the process of starting the User Profile Service, which works fine, but when I start the User Profile Synchronization service it sits on "Starting", and when I refresh the page or go to the monitoring section it shows as aborted. Now, I'm new to administering servers, like I say. When I start the User Profile Synchronization service it tries to run as NT AUTHORITY\NETWORK SERVICE and asks for a password, so I've been providing it with the admin password. I'm not sure if this is part of the issue or not, as I've checked the log files and they seem to say that it doesn't have permissions (which is fair enough), but I can't see how you could change the account even if I wanted to. If anyone could help it would be appreciated; if you need any further information to help with an answer, just let me know.

  • Linux noob: installing ffmpeg + dependencies on AWS Linux AMI (repo issues)

    - by HdN8
    I'm installing ffmpeg to run on an Amazon Linux AMI, and have added the RPMforge repo and the DAG repo. Here are some guidelines I'm using for reference: TWoZaO and Razuna. The RPMforge repo has ffmpeg, but if you try to install it, it complains that it is missing dependencies (for me, libSDL-1.2.so.0()(64bit)). Regardless, I will install ffmpeg from SVN so I can be sure to enable the options I want (namely libx264). It seems strange to me, though, that SDL is not in RPMforge or DAG; according to both of my references above, it should be there. I tried to grab it manually from here, but it needs these dependencies, so no go:

        error: Failed dependencies:
        SDL = 1.2.10-8.el5 is needed by SDL-devel-1.2.10-8.el5.x86_64
        alsa-lib-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libGL-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libGLU-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libSDL-1.2.so.0()(64bit) is needed by SDL-devel-1.2.10-8.el5.x86_64
        libX11-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXext-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXrandr-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXrender-devel is needed by SDL-devel-1.2.10-8.el5.x86_64
        libXt-devel is needed by SDL-devel-1.2.10-8.el5.x86_64

    Any advice for a Linux noob lost in a mess of repos and dependency errors?
