Search Results

Search found 24117 results on 965 pages for 'write through'.

Page 282/965 | < Previous Page | 278 279 280 281 282 283 284 285 286 287 288 289  | Next Page >

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; I was able to see that what I was doing was an important part of the product from the feedback we got in the UX tests. All these things almost made me forget that this is just an internship and not my full-time job. First steps at Red Gate Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have. What I’ve done If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes them easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts. How I did it In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammar in. We also liked the idea of separating the grammar building from parsing a text. 
    The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project. Unfortunately, there wasn’t an MDX grammar written for GOLD already, so I had to write it myself. I referenced MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn’t that easy to find and implement. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar. When an MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be output to the screen. Doing all this has introduced me to many new technologies and projects I hadn’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things. What’s coming next Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with the difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I’ll see you next summer!

    Read the article

  • How to customise search core results web part Part1

    - by ybbest
    In this post, I’d like to show you how to customise the search core results web part. It is quite simple; most of the time all you need to do is change the XSLT to make the changes. Here are the steps:
    1. Change the XSLT to the following, so that you can see the raw XML: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" > <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" /> <xsl:template match="/"> <xmp><xsl:copy-of select="*"/></xmp> </xsl:template> </xsl:stylesheet>
    a. To do so, go to edit page >> Edit search core results web part >> Display Properties and then untick "Use Location Visualization".
    b. Open the XSLT editor and copy the existing XSLT code to your preferred XSLT editor so that you can customise it.
    c. Now you can paste in the XSLT code above.
    2. Perform the search after you have completed step 1 and you will see the search results returned as raw XML: <All_Results> <Result> <id>1</id> <workid>678</workid> <rank>100000000</rank> <title>Ybbest</title> <author></author> <size>137531</size> <url>http://ybbest</url> <urlEncoded>http%3A%2F%2Fybbest</urlEncoded> <description>Ybbest test site</description> <write>3/17/2012</write> <sitename>http://ybbest</sitename> <collapsingstatus>0</collapsingstatus> <hithighlightedsummary> <c0>Ybbest</c0> test site <ddd /> Add a new image, change this welcome text or add new lists to this page by clicking the edit button above. You can click on Shared Documents to add files or on the <ddd /> </hithighlightedsummary> <hithighlightedproperties> <HHTitle> <c0>Ybbest</c0> </HHTitle> <HHUrl>http://<c0>ybbest</c0></HHUrl> </hithighlightedproperties> <contentclass>STS_Site</contentclass> <isdocument>False</isdocument> <picturethumbnailurl></picturethumbnailurl> <popularsocialtags /> <picturewidth>0</picturewidth> <pictureheight>0</pictureheight> <datepicturetaken></datepicturetaken> <serverredirectedurl></serverredirectedurl> <fileextension></fileextension> <ows_metadatafacetinfo></ows_metadatafacetinfo> <imageurl imageurldescription="SharePoint Site Collection">/_layouts/images/siteicon_16x16.png</imageurl> </Result> <TotalResults>69</TotalResults> <NumberOfResults>50</NumberOfResults> </All_Results>
    3. Then you can read what has been returned in the raw XML and start modifying the XSLT to customise your search results page (see the sketch below for a minimal starting point).
    4. You can also link an external XSLT to the web part. It can be set in the Miscellaneous section of the Web Part settings. You can also set it programmatically using a feature receiver; you can download the source code to do so here.
    References: http://stackoverflow.com/questions/6548104/change-xslt-of-the-searchresultwebpart-during-the-featureactivated http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/04/05/a-quick-guide-to-coreresultswebpart-configuration-changes-in-sharepoint-2010.aspx http://www.tonytestasworld.com/post/2011/01/30/HowTo-display-SharePoint-Search-results-as-raw-XML.aspx
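    As a minimal illustration of step 3, the sketch below renders each Result node from the raw XML above as a linked title followed by its description. It is only a hedged starting point, not the full out-of-the-box stylesheet, and the CSS class name is just an example.

    ```xml
    <!-- Minimal sketch: render each search result as a linked title + description. -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html" indent="yes"/>

      <xsl:template match="/All_Results">
        <ul>
          <xsl:apply-templates select="Result"/>
        </ul>
      </xsl:template>

      <xsl:template match="Result">
        <li>
          <!-- {url} is an attribute value template pulling the url element of this Result -->
          <a href="{url}"><xsl:value-of select="title"/></a>
          <div class="result-description"><xsl:value-of select="description"/></div>
        </li>
      </xsl:template>
    </xsl:stylesheet>
    ```

    From here you can pull in any of the other fields shown in the raw XML, such as hithighlightedsummary or imageurl, in the same way.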

    Read the article

  • C++ AMP open specification

    - by Daniel Moth
    Those of you interested in C++ AMP should know that I blog about that topic on our team blog. Just now I posted (and encourage you to go read) our much awaited announcement about the publication of the C++ AMP open specification. For those of you into compiling instead of reading, 3 days ago I posted a list of over a dozen C++ AMP samples. To follow what I and others on my team write about C++ AMP, stay tuned on our RSS feed. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader’s Digest type posts. I’ve been sharing 1-3 tips or ‘tricks’ a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I’m lazy, I’m just going to compile a list of my favorite ‘must know’ tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I’m not forgetting anything. So without further ado… 1. Configure Your Preferences Yes, there are a LOT of options. But you don’t need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you’re new to the tool or have been using it for 5 years, don’t overlook these settings! 2. Disable Extensions You Aren’t Using If you’re not using Data Miner, or if you’re not working on a Migration – disable those extensions! SQL Developer will run leaner & meaner, plus the user interface will be a bit more simplified making the tool easier to navigate as well. 3. SQL Recall via Keyboard Access your history via the keyboard! Cycle through your recent SQL statements just using these magic key strokes! Ctrl+Up or Ctrl+Down. 4. Format Your Query Output Directly to CSV, XML, HTML, etc Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice! 5. Drag & Drop Multiple Tables to the Worksheet SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don’t want to query. I guarantee this tip will save you time if you’re joining 3 or more tables! 6. Drag & Drop Multiple Tables to a Relational Model A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc. 7. View Your PL/SQL Execution Output Automatically Function returns a refcursor? Procedure had 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place them into one or more data grids for you to browse. 8. Disable Automatic Code Insight and Use It On-Demand Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) Some folks really don’t like it when their IDEs or word-processors try to do ‘too much’ for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand. 9. Interactive Debugging – Change Your Variable Values as You Step Through Your PLSQL Watches aren’t just for watching. You can actually interact with your programs and ‘see what happens’ when X = 256 instead of 1. 10. Ditch the Tree View for the Schema Browser There’s nothing wrong with the Connection tree for browsing your database objects. But some folks just can’t seem to get comfortable with it. So, we built them a Schema Browser that uses a drop down control instead for changing up your schema and object types. Already Know This Stuff, Want More? 
    Just check out my SQL Developer resource page; it’s one of the main links at the top of this page. Or if you can’t find something, just drop me a note in the form of a comment on this page and I’ll do my best to find it or write it for you.
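    As a quick illustration of tip 4 above, these are the output hints SQL Developer recognises when a statement is run as a script (F5); the table name is just a placeholder and the exact set of formats depends on your SQL Developer version.

    ```sql
    -- Pre-format the result set without going through the Export wizard.
    SELECT /*csv*/  * FROM employees;  -- comma-separated values
    SELECT /*xml*/  * FROM employees;  -- XML
    SELECT /*html*/ * FROM employees;  -- HTML table
    ```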

    Read the article

  • BSON Serialization

    BSON is a binary-encoded serialization of JSON-like documents, which essentially means it’s an efficient way of transferring information. Part of my work on the MongoDB NoRM drivers, discussed in more detail by Rob Conery, is to write an efficient and maintainable BSON serializer and deserializer. The goal of the serializer is that you give it a .NET object and you get a byte array out of it which represents valid BSON. The deserializer does the opposite - give it a byte array and out pops your object.

    Read the article

  • Great library of ASP.NET videos – Pluralsight!

    - by hajan
    I have been subscribed to the Pluralsight website and, since ASP.NET is my favorite development technology, I have gone through a few of the video series related to ASP.NET. There are ASP.NET series ranging from fundamentals to advanced topics, including the latest features of ASP.NET 4.0, ASP.NET Ajax, ASP.NET MVC, etc. Most of the speakers are either Microsoft MVPs or well-known technology experts! I was really curious to see the way they have organized the entire course material, and trust me, I was quite amazed. I watched the ASP.NET 4.0 video series to confirm my knowledge, along with some other video series regarding general software development concepts, design patterns, etc. I would like to point out that if any of you is interested in getting a FREE 1-week .NET training pass for the Pluralsight library, please CONTACT ME - write your name and email and include the purpose of the message in the content. I hope you will find this useful. Regards, Hajan

    Read the article

  • What I’m working on for this blog…

    - by marc dekeyser
    Yes it has gone quiet again for the time being! As I am in training for Exchange 2013 and have the need to keep some customers happy (well, we all have to do something to earn our keep ;)) time to write blog posts or even work on my little side projects is limited. So for the time being there are no new blog posts coming but I’d like to tell you that you can expect posts on the following topics: * Automating lab server deployments (Using WDS and MDT 2012 RU1) * Scripts to automate application installations (and integration with the above) * Exchange 2013 posts * Exchange 2013 automation scripts (since I’m already seeing where I could do something here :P) As always, I’m still taking requests…

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
    Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object. From Books Online: LCK_M_BU Occurs when a task is waiting to acquire a Bulk Update (BU) lock. LCK_M_IS Occurs when a task is waiting to acquire an Intent Shared (IS) lock. LCK_M_IU Occurs when a task is waiting to acquire an Intent Update (IU) lock. LCK_M_IX Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock. LCK_M_S Occurs when a task is waiting to acquire a Shared lock. LCK_M_SCH_M Occurs when a task is waiting to acquire a Schema Modify lock. LCK_M_SCH_S Occurs when a task is waiting to acquire a Schema Share lock. LCK_M_SIU Occurs when a task is waiting to acquire a Shared With Intent Update lock. LCK_M_SIX Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock. LCK_M_U Occurs when a task is waiting to acquire an Update lock. LCK_M_UIX Occurs when a task is waiting to acquire an Update With Intent Exclusive lock. LCK_M_X Occurs when a task is waiting to acquire an Exclusive lock. LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operation may be going on within it. This wait also indicates that resources are not available or are occupied at the moment for some reason. There is a good chance that the waiting queries start to time out if this wait type is very high. The client application may see degraded performance as well. You can use various methods to find blocking queries: EXEC sp_who2 SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution DMV – sys.dm_tran_locks DMV – sys.dm_os_waiting_tasks Reducing LCK_M_XXX wait: Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep the transactions small. Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is ‘Read Committed’. One of my clients has changed their isolation to “Read Uncommitted”. I strongly discourage the use of this because it will probably lead to reading lots of dirty (uncommitted) data. Identify blocking queries using the various methods described above, and then optimize them. Partitioning can be one of the options to consider because it will allow transactions to execute concurrently on different partitions. If there are runaway queries, use a timeout. (Please discuss this solution with your database architect first as timeouts can work against you).
    Check whether there are any memory- or IO-related issues using the following counters: Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better, greater than 90% for a usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 milliseconds is not good) Average Disk sec/Write (Consistent higher value than 4-8 milliseconds is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I do not claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
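    For reference, here is roughly what the DMV options listed above look like in practice - a sketch for spotting blockers, not a tuned diagnostic script.

    ```sql
    -- Sessions currently blocked, and who is blocking them
    SELECT wt.session_id,
           wt.blocking_session_id,
           wt.wait_type,
           wt.wait_duration_ms,
           wt.resource_description
    FROM   sys.dm_os_waiting_tasks AS wt
    WHERE  wt.blocking_session_id IS NOT NULL;

    -- Lock requests that are still waiting to be granted
    SELECT request_session_id,
           resource_type,
           DB_NAME(resource_database_id) AS database_name,
           request_mode,
           request_status
    FROM   sys.dm_tran_locks
    WHERE  request_status = 'WAIT';
    ```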

    Read the article

  • CVV Code For Authorize.com using osCommerce

    - by user3567
    Hi, I need to add a CVV code field for verifying credit cards upon checkout in my osCommerce shopping cart. I think this will involve a change to the authorize.net PHP file and the checkout processing PHP file, but I'm not sure. I found a great write-up, but it only covers the authorize.net PHP file and it doesn't create a field for the CVV to be keyed in. Also it throws an error with the 'echo validate.' I can't seem to find anything in any of the forums for osCommerce or anywhere else. Hoping someone here will have some knowledge. Thanks.

    Read the article

  • Extending ASP.NET Output Caching

    One of the most sure-fire ways to improve a web application's performance is to employ caching. Caching takes some expensive operation and stores its results in a quickly accessible location. Since its inception, ASP.NET has offered two flavors of caching: Output Caching - caches the entire rendered markup of an ASP.NET page or User Control (http://www.asp101.com/lessons/usercontrols.asp) for a specified duration. Data Caching - an API for caching objects. Using the data cache you can write code to add, remove, and retrieve items from the cache. Until recently, the underlying functionality of these two caching mechanisms was fixed - both cached data
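    To make the two flavors concrete, here is a minimal sketch (not taken from the article; names such as LoadProducts are placeholders). Output Caching is declared on the page, while Data Caching is driven from code.

    ```csharp
    // Output Caching: declared on the .aspx page itself, e.g.
    //   <%@ OutputCache Duration="60" VaryByParam="None" %>
    //
    // Data Caching: a small cache-aside helper using the ASP.NET cache API.
    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ProductCache
    {
        public static object GetProducts()
        {
            object products = HttpRuntime.Cache["products"];
            if (products == null)
            {
                products = LoadProducts(); // placeholder for the expensive operation
                HttpRuntime.Cache.Insert(
                    "products", products, null,
                    DateTime.UtcNow.AddMinutes(10),   // absolute expiration
                    Cache.NoSlidingExpiration);
            }
            return products;
        }

        private static object LoadProducts()
        {
            return new[] { "example", "data" };
        }
    }
    ```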

    Read the article

  • JPA - insert and retrieve clob and blob types

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature for handling clob and blob data types. You will learn the following in this article: the @Lob annotation; client code to insert and retrieve clob/blob types; and an end-to-end ADF Faces application that retrieves an image from a database table and displays it in a web page.
    Use Case Description: persisting and reading an image from the database using the JPA clob/blob type.
    @Lob annotation: By default, TopLink JPA assumes that all persistent data can be represented as typical database data types. Use the @Lob annotation with a basic mapping to specify that a persistent property or field should be persisted as a large object to a database-supported large object type. A Lob may be either a binary or character type. TopLink JPA infers the Lob type from the type of the persistent field or property. For string and character-based types, the default is Clob. In all other cases, the default is Blob.
    Example: the code below shows how to use this annotation to specify that the persistent field picture should be persisted as a Blob.
    @Entity
    public class Person implements Serializable {
        @Id
        @Column(nullable = false, length = 20)
        private String name;
        @Column(nullable = false)
        @Lob
        private byte[] picture;
        @Column(nullable = false, length = 20)
        private String sex;
    }
    Client code to insert and retrieve the clob/blob types.
    Reading an image file and inserting it into the database table: the client code below reads the image from a file and persists it to the Person table in the database.
    Person p = new Person();
    p.setName("Tom");
    p.setSex("male");
    p.setPicture(writtingImage("Image location")); // e.g. c:\images\test.jpg
    sessionEJB.persistPerson(p);
    // Retrieving the image from the database table and writing it to a file
    List<Person> plist = sessionEJB.getPersonFindAll();
    Person person = (Person) plist.get(0); // get a person object
    retrieveImage(person.getPicture());    // get the picture retrieved from the table
    // Private method to create byte[] from an image file
    private static byte[] writtingImage(String fileLocation) {
        System.out.println("file location is " + fileLocation);
        IOManager manager = new IOManager();
        try {
            return manager.getBytesFromFile(fileLocation);
        } catch (IOException e) {
        }
        return null;
    }
    // Private method to read byte[] from the database and write it to an image file
    private static void retrieveImage(byte[] b) {
        IOManager manager = new IOManager();
        try {
            manager.putBytesInFile("c:\\webtest.jpg", b);
        } catch (IOException e) {
        }
    }
    End-to-end ADF Faces application to retrieve the image from a database table and display it in a web page: please find the application in this link. The following J2EE components are used in the sample application: ADF Faces (jspx page); an HttpServlet class, which makes a call to the EJB, retrieves the person object from the Person table, reads the byte[] and writes it to the response using an OutputStream; a SessionEJB bean, which is a session facade used to make a local call to the JPA entities; and the JPA entity (Person.java), a Person class with setter and getter methods, annotated with @Lob on the picture field to represent the clob/blob type.

    Read the article

  • On Contract Employment

    - by kerry
    I am going to post about something I don’t post about a lot, the business side of development.  Scott at the antipimp does a good job of explaining how contracts work from a business perspective.  I am going to give a view from the ground. First, a little background on myself.  I have recently taken a 6 month contract after about 8 years of fulltime employment.  I have 2 kids, and a stay at home wife.  I took this contract opportunity because I wanted to try it on for size.  I have always wondered whether I would like doing contracts over fulltime employment.  So, in keeping with the theme of this blog I will write this down now so that I may reference it later. ALL jobs are temporary! Right now you may not realize it, most people simply ignore it, but EVERY job is temporary.  Everyone should be planning for life after the money stops coming in.  Sadly, most people do not.  Contracting pushes this issue to the forefront, making you deal with it.  After a month on a contract, I am happy to say that I am saving more than I ever saved in a fulltime position.  Hopefully, I will be ready in case of an extended window of unemployment between contracts. Networking I find it extremely gratifying getting to know people.  It is especially beneficial when moving to a new city.  What better way to go out and meet people in your field than to work a few contracts?  6 months of working beside someone and you get to know them pretty well.  This is one of my favorite aspects. Technical Agility Moving between IS shops takes (or molds you into) a flexible person.  You have to be able to go in and hit the ground running.  This means you need to be able to sit down and start work on a large codebase working in a language that you may or may not have that much experience in.  It is also an excellent way to learn new languages and broaden your technical skill set.  I took my current position to learn Ruby.  A month ago, I had only used it in passing, but now I am using it every day.  It’s a tragedy in this field when people start coding for the joy and love of coding, then become so deeply entrenched in their company’s methods and technologies that it becomes just a job. Less Stress I am not talking about the kind of stress you get from a jackass boss.  I am talking about the kind of stress I (or others) experience about planning and future-proofing your code.  Not saying I stay up at night worrying about whether we have done it right, or whether that code I wrote today is going to bite me later, but it still creeps around in the dark recesses of my mind.  Careful though, I am not suggesting you write sloppy code; just defer any large architectural or design decisions to the ‘code owners’.  Flexible Scheduling It makes me very happy to be able to cut out a few hours early on a Friday (provided the work is done) and start the weekend off early by going to the pool, or taking the kids to the park.  Contracting provides you this opportunity (mileage may vary).  Most of your fulltime brethren will not care; they will just be jealous that their corporate policy prevents them from doing the same.  However, you must be mindful of situations where this is not appropriate, and don’t overdo it.  You are there to work after all. Affirmation of Need Have you ever been stuck in a job where you thought you were underpaid?  Have you ever been in a position where you felt like there was not enough workload for you?  This is not a problem for contractors.  
When you start a contract it is understood that you are needed, and the employer knows that you are happy with the terms. Contracting may not be for everyone.  But, if you develop a relationship with a good consulting firm, keep their clients happy, then they will keep you happy.  They want you to work almost as much as you do.  Just be sure and plan financially for any windows of unemployment.

    Read the article

  • Why does Ubuntu refuse to execute files from an NTFS partition?

    - by Ivan
    I mount an NTFS partition (where I've got some Linux binaries and scripts alongside Win32 and data files) with the following fstab line: /dev/sda5 /mnt/dat ntfs-3g rw,dev,exec,auto,async,users,umask=000,uid=1000,gid=1000,locale=en_US.utf8,errors=remount-ro 0 0 All files then seem to have the executable attribute set, but if I try to actually execute them, I get a "Permission denied" error. Even with sudo. Even though execute (as well as read and write) permissions are granted to everyone and the owner of all the files is set to the user. So how do I set the system up to be able to run Linux binaries from NTFS?

    Read the article

  • My freshly installed Ubuntu 12.04 won’t start without the live USB drive

    - by Alexander Neira
    I just installed a fresh copy of Ubuntu 12.04 on my netbook from a live USB drive. I used the whole HDD and erased an existing Win 7 partition. It installed everything and then asked me to reboot. When I did that, after rebooting, only the blinking cursor appeared on top of a black screen and nothing else. Then, I plugged my USB drive in again and rebooted. It loaded, but on the loading screen it showed me (once) an error about the hard drive, saying that some file was missing (sorry, I couldn't write down the exact message). After that, I tried to reinstall Ubuntu using my USB drive, but it sends me immediately to the login screen. How do I solve this?

    Read the article

  • Deck from London UG 20110616 - Building a Reporting Brick capable of 1.2GBytes/sec and 80K IOs/sec for less than £2K

    - by tonyrogerson
    The Reporting Brick concept is not really anything new; it starts the walk toward bringing the work Jim Gray and Tom Barclay et al did on CyberBricks up to date in terms of current kit. A reporting brick is simply a box built from commodity kit utilising commodity SSD, namely the OCZ IBIS drives, to gain extremely high levels of performance for a fraction of the cost required for typical server and SAN installs today. I'll write this up over the next few months as I work further on the concept; for now the deck attached summarises some of the ideas around it. The deck was presented at last night's London SQL Server User Group, and I will be presenting it again in Edinburgh on the 29th June and at other locations later in the year. Deck: Commodity Kit.pptx  

    Read the article

  • How was your experience working as a game tester?

    - by MrDatabase
    I'm currently an independent game developer. I'm open to the idea of working on a team in the game industry. I'm under the impression that being a "game tester" is a relatively easy way to get a job... however that job may be somewhat undesirable. So how was your experience working as a tester in the game industry? Some interesting experiences could include: Did the game tester position lead to other more desirable positions? How were the relationships between testers and developers? Did you write any code? (test "frameworks", unit tests etc) If bugs made it into production was any (potentially unfair) blame put on the testers?

    Read the article

  • Part 14: Execute a PowerShell script

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application With PowerShell you can add powerful scripting to your build to, for example, execute a deployment. If you want more information on PowerShell, please refer to http://technet.microsoft.com/en-us/library/aa973757.aspx For this example we will create a simple PowerShell script that prints “Hello world!”. To create the script, create a new text file and name it “HelloWorld.ps1”. Add to the contents of the script: Write-Host "Hello World!" To test the script do the following: Open the command prompt To run the script you must change the execution policy. To do this execute in the command prompt: powershell set-executionpolicy remotesigned Now go to the directory where you have saved the PowerShell script Execute the following command powershell .\HelloWorld.ps1 In this example I use a relative path, but when the path to the PowerShell script contains spaces, you need to change the syntax to powershell "& '<full path to script>' " for example: powershell "& 'C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HelloWorld.ps1' " In this blog post, I create a new solution and that solution also includes this PowerShell script. I want to create an argument on the Build Process Template that holds the path to the PowerShell script. In the Build Process Template I will add an InvokeProcess activity to execute the PowerShell command. This InvokeProcess activity needs the location of the script as an argument for the PowerShell command. Since you don’t know the full path of this script on the build server, you could specify the relative path of the script in the argument, but it is hard to work out what that relative path is. I prefer to specify the location of the script in source control and then convert that server path to a local path. To do this conversion you can use the ConvertWorkspaceItem activity. So to complete the task, open the Build Process Template CustomTemplate.xaml that we created in earlier parts, and follow these steps: Add a new argument called “DeploymentScript” and set the appropriate settings in the metadata. See Part 2: Add arguments and variables  for more information. Scroll down beneath the TryCatch activity called “Try Compile, Test, and Associate Changesets and Work Items”. Add a new If activity and set the condition to "Not String.IsNullOrEmpty(DeploymentScript)" to ensure it will only run when the argument is passed. 
    Add in the Then branch of the If activity a new Sequence activity and rename it to “Start deployment”. Click on the activity and add a new variable called DeploymentScriptFilename (scoped to the “Start deployment” Sequence). Add a ConvertWorkspaceItem activity on the “Start deployment” Sequence. Add an InvokeProcess activity beneath the ConvertWorkspaceItem activity in the “Start deployment” Sequence. Click on the ConvertWorkspaceItem activity and change the properties: DisplayName = Convert deployment script filename Input = DeploymentScript Result = DeploymentScriptFilename Workspace = Workspace Click on the InvokeProcess activity and change the properties: Arguments = String.Format(" ""& '{0}' "" ", DeploymentScriptFilename) DisplayName = Execute deployment script FileName = "PowerShell" To see results from the PowerShell command, drop a WriteBuildMessage activity on the "Handle Standard Output" and pass the stdOutput variable to the Message property. Do the same for a WriteBuildError activity on the "Handle Error Output". To publish it, check in the Build Process Template. This leads to the following result. We now go to the build definition that depends on the template and set the path of the deployment script to the server path of the HelloWorld.ps1. (If you want to see the result of the PowerShell script, change the Logging verbosity to Detailed or Diagnostic). Save and run the build. A lot of the deployment scripts you have will take some kind of arguments (like username / password or environment variables) that you want to define in the Build Definition. To make the PowerShell script configurable, follow these steps. Create a new script and give it the name “HelloWho.ps1”. In the contents of the file add the following lines: param ( $person ) $message = [System.String]::Format("Hello {0}!", $person) Write-Host $message When you now run the script on the command prompt, you will see the following. So let’s change the Build Process Template to accept one parameter for the deployment script. You could of course make it more configurable by adding a for-loop that reads through a collection of parameters, but that is out of scope for this blog post. Add a new Argument called DeploymentScriptParameter. In the InvokeProcess activity where the PowerShell command is executed, modify the Arguments property to String.Format(" ""& '{0}' '{1}' "" ", DeploymentScriptFilename, DeploymentScriptParameter) Check in the Build Process Template. Now modify the build definition, set the Parameter of the deployment to any value and run the build. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
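    For convenience, here are the two pieces from this walkthrough again in runnable form; the script path in the comment is the example folder used earlier in this post, so substitute your own.

    ```powershell
    # HelloWho.ps1 - the parameterised deployment script
    param (
        $person
    )
    $message = [System.String]::Format("Hello {0}!", $person)
    Write-Host $message

    # What the InvokeProcess activity effectively executes once
    # DeploymentScriptParameter is set to, say, World:
    # powershell "& 'C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HelloWho.ps1' 'World' "
    ```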

    Read the article

  • How to train yourself to avoid writing “clever” code?

    - by Dan Abramov
    Do you know that feeling when you just need to show off that new trick with Expressions or generalize three different procedures? This does not have to be on an Architecture Astronaut scale and in fact may be helpful, but I can't help noticing that someone else would implement the same class or package in a clearer, more straightforward (and sometimes boring) manner. I've noticed I often design programs by oversolving the problem, sometimes deliberately and sometimes out of boredom. In either case, I usually honestly believe my solution is crystal clear and elegant, until I see evidence to the contrary, but by then it's usually too late. There is also a part of me that prefers undocumented assumptions to code duplication, and cleverness to simplicity. What can I do to resist the urge to write “cleverish” code, and when should the bell ring that I am Doing It Wrong? The problem is becoming even more pressing as I'm now working with a team of experienced developers, and sometimes my attempts at writing smart code seem foolish even to myself after time dispels the illusion of elegance.

    Read the article

  • Browsing Your ADF Application Module Pooling Params with WLST

    - by Duncan Mills
    In ADF 11g you can of course use Enterprise Manager (EM) to browse and configure the settings used by ADF Business Components  Application Modules, as shown here for one of my sample deployed applications. This screen you can access from the EM homepage by pulling down the Application Deployment menu, and then ADF > Configure ADF Business Components. Then select the profile that you are actually using (Hint: look in the DataBindings.cpx file to work this out - probably the "Local" version unless you've explicitly changed it. )So, from this screen you can change the pooling parameters and the world is good. But what if you don't have EM installed? In that case you can use the WebLogic scripting capabilities to view (and Update) the MBean Properties. Explanation The pooling parameters and many others are handled through Message Driven Beans that are created for the deployed application in the server. In the case of the ADF BC pooling parameters, this MBean will combine the configuration deployed as part of the application, along with any overrides defined as -D environement commands on the JVM startup for the application server instance. Using WLST to Browse the Bean ValuesFor our purposes here I'm doing this interactively, although you can also write a script or write Java to achieve the same thing.Step 0: Before You Start You will need the followingAccess to the console on the machine that is running the serverThe WebLogic Admin username and password (I'll use weblogic/password as my example here - yours will be different)The name of the deployed application (in this example FMWdh_application1)The package path to the bc4j.xcfg file (in this example oracle.demo.fmwdh.model.service.common.bc4j.xcfg) This is based on the default path for your model project so it shoudl be fairly easy to work out.The BC configuration your AM is actually running with (look in the DataBindings.cpx for that. In this example DealHelpServiceDeployed is the profile being used..)Step 1: Start the WLST consoleTo start at the beginning, you need to run the WLST command but that needs a little setup:Change to the wlserver_10.3/server/bin directory e.g. under your Fusion Middleware Home[oracle@mymachine] cd /home/oracle/FMW_R1/wlserver_10.3/server/binSet your environment using the setWLSEnv script. e.g. on Oracle Enterprise Linux:[oracle@mymachine bin] source setWLSEnv.shStart the WLST interactive console[oracle@mymachine bin] java weblogic.WLSTInitializing WebLogic Scripting Tool (WLST) ...Welcome to WebLogic Server Administration Scripting ShellType help() for help on available commandswls:/offline> Step 2:Enter the WLST commandsConnect to the server wls:> connect('weblogic','password')Change to the Custom root, this is where the AMPooling MBeans are registered wls:> custom()Change to the b4j MBean directorywls:> cd ('oracle.bc4j.mbean.config')Work out the correct directory for the AM configuration you need. This is the difficult bit, not because it's hard to do, but because the names are long. The structure here is such that every child MBean is displayed at the same level as the parent, so for each deployed application there will be many directories shown. In fact, do an ls() command here and you'll see what I mean. Each application will have one MBean for the app as a whole, and then for each deployed configuration in the .xcfg file you'll see: One for the config entry itself, and then one each for Security, DB Connection and AM Pooling. 
So if you deploy an app with just one configuration you'll see 5 directories, if it has two configurations in the .xcfg you'll see 9 and so on.The directory you are looking for will contain those bits of information you gathered in Step 0, specifically the Application Name, the configuration you are using and the xcfg name: First of all narrow your list to just those directories returned from the ls() command that begin oracle.bc4j.mbean.config:name=AMPool. These identify the AM pooling MBeans for all the deployed applications. Now look for the correct application name e.g. Application=FMWdh_application1The config setting in that sub-list should already be correct and match what you expect e.g. oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfgFinally look for the correct value for the AppModuleConfigType e.g. oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployedNow you have identified the correct directory name, change to that (keep the name on one line of course - I've had to split it across lines here for clarity:wls:> cd ('oracle.bc4j.mbean.config:name=AMPool,     type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,    oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,    Application=FMWdh_application1,    oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed') Now you can actually view the parameter values with a simple ls() commandwls:> ls()And here's the output in which you can view the realtime values of the various pool settings: -rw- AmpoolConnectionstrategyclass oracle.jbo.common.ampool.DefaultConnectionStrategy -rw- AmpoolDoampooling true -rw- AmpoolDynamicjdbccredentials false -rw- AmpoolInitpoolsize 2 -rw- AmpoolIsuseexclusive true -rw- AmpoolMaxavailablesize 40 -rw- AmpoolMaxinactiveage 600000 -rw- AmpoolMaxpoolsize 4096 -rw- AmpoolMinavailablesize 2 -rw- AmpoolMonitorsleepinterval 600000 -rw- AmpoolResetnontransactionalstate true -rw- AmpoolSessioncookiefactoryclass oracle.jbo.common.ampool.DefaultSessionCookieFactory -rw- AmpoolTimetolive 3600000 -rw- AmpoolWritecookietoclient false -r-- ConfigMBean true -rw- ConnectionPoolManager oracle.jbo.server.ConnectionPoolManagerImpl -rw- Doconnectionpooling false -rw- Dofailover false -rw- Initpoolsize 0 -rw- Maxpoolcookieage -1 -rw- Maxpoolsize 4096 -rw- Poolmaxavailablesize 25 -rw- Poolmaxinactiveage 600000 -rw- Poolminavailablesize 5 -rw- Poolmonitorsleepinterval 600000 -rw- Poolrequesttimeout 30000 -rw- Pooltimetolive -1 -r-- ReadOnly false -rw- Recyclethreshold 10 -r-- RestartNeeded false -r-- SystemMBean false -r-- eventProvider true -r-- eventTypes java.lang.String[jmx.attribute.change] -r-- objectName oracle.bc4j.mbean.config:name=AMPool,type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,Application=FMWdh_application1,oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed -rw- poolClassName oracle.jbo.common.ampool.ApplicationPoolImpl Thanks to Brian Fry on the JDeveloper PM Team who did most of the work to put this sequence of steps together with me badgering him over his shoulder.
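    If you would rather not type the steps interactively, the same sequence can be wrapped in a small WLST (Jython) script along these lines. The connection URL and the long MBean path are examples and must be replaced with the values you worked out in Step 2.

    ```python
    # browse_ampool.py - run with: java weblogic.WLST browse_ampool.py
    connect('weblogic', 'password', 't3://localhost:7001')
    custom()
    cd('oracle.bc4j.mbean.config')

    # Paste the AMPool MBean directory name you identified (kept as one logical string).
    cd('oracle.bc4j.mbean.config:name=AMPool,'
       'type=oracle.bc4j.mbean.config.AppModuleConfigType.AMPoolType,'
       'oracle.bc4j.mbean.config=oracle.demo.fmwdh.model.service.common.bc4j.xcfg,'
       'Application=FMWdh_application1,'
       'oracle.bc4j.mbean.config.AppModuleConfigType=DealHelpServiceDeployed')

    # Print just the pooling values of interest instead of the full ls() output.
    for attr in ['AmpoolInitpoolsize', 'AmpoolMinavailablesize',
                 'AmpoolMaxavailablesize', 'AmpoolMaxpoolsize', 'AmpoolMaxinactiveage']:
        print attr, '=', get(attr)

    disconnect()
    ```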

    Read the article

  • OTN Developer Day: Oracle Database 11g Application Development

    - by stephen.garth
    When and Where: Tuesday June 15, 2010 from 8:00 am - 5:30 pm Hyatt Regency Reston, Reston VA This full-day FREE event offers a great learning and networking opportunity. With support from Oracle database application development experts, you'll get valuable hands-on experience developing database-backed apps with the latest Oracle tools and frameworks. Oh yeah, you get to use your own notebook and download some cool and very useful materials. Find out more and register here.

    Read the article

  • An OLAP client!

    - by Davide Mauri
    While surfing CodePlex I’ve come across a very interesting tool for all BI developers who miss a decent OLAP client in which to write, run & test MDX queries: http://ranetuilibraryolap.codeplex.com/ I’ve not tested it yet, but I’ll surely do so this week and I’ll post my impressions ASAP. The first impression, just from looking at the CodePlex page, is that the tool Rocks!!!!!

    Read the article

  • Sed problem in a Bash script

    - by moata_u
    Hello there. I'm having a problem using the sed command. I'm trying to write a bash script that does the following: search for the line that contains :@, then save the line that contained :@ and replace it with a new line, as in the following:
    #! /bin/bash
    echo "Please enter the ip address of you file"
    read ipnumber
    find=`grep ':@' application.properties` # find the line
    input="connection.url=jdbc\:oracle\:thin\:@$ipnumber\:1521\:billz" # preparing new line
    echo `sed "s/'${find}'/'${input}'/g" application.properties` # replace old with new line
    The problem is: nothing happens. I've already tried to use "${find}" instead of '${find}'
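    For what it's worth, a minimal working variant of the replacement usually looks something like the sketch below. It assumes GNU sed: -i writes the change back to the file instead of echoing it, and | is used as the delimiter so the slashes in the replacement don't terminate the expression (colons don't need escaping in a .properties value, only in keys).

    ```bash
    #!/bin/bash
    read -p "Please enter the ip address of your file: " ipnumber

    # The new connection string (plain colons are fine in a properties *value*).
    input="connection.url=jdbc:oracle:thin:@${ipnumber}:1521:billz"

    # Replace the whole line containing :@ in place.
    sed -i "s|^.*:@.*$|${input}|" application.properties
    ```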

    Read the article

  • Guru of the Week #43: copy on write - part one, an article by Herb Sutter translated by the C++ editorial team

    The "copy-on-write" idiom (also known as "COW" or "implicit sharing") is a programming technique that is (or should be) well known to developers using Qt. This technique can avoid unnecessary copies of large objects (such as QString or QVector) by performing the copy only when an object is first modified. In this article, Herb Sutter details a few possible implementations and compares their respective performance. Guru of the Week #43: copy on write - part one
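    To make the idiom concrete, here is a minimal single-threaded sketch (not Herb Sutter's code from the article): copies share one buffer through a reference-counted pointer, and the real copy happens only on the first write.

    ```cpp
    #include <cstddef>
    #include <iostream>
    #include <memory>
    #include <string>

    // Copy-on-write wrapper: reads never copy, writes clone the buffer if it is shared.
    // Note: the use_count() check is not thread-safe; the article series covers that later.
    class CowString {
    public:
        explicit CowString(std::string s) : data_(std::make_shared<std::string>(std::move(s))) {}

        char operator[](std::size_t i) const { return (*data_)[i]; }

        void set(std::size_t i, char c) {
            if (data_.use_count() > 1)                       // shared: detach before mutating
                data_ = std::make_shared<std::string>(*data_);
            (*data_)[i] = c;
        }

        bool shares_buffer_with(const CowString& other) const { return data_ == other.data_; }

    private:
        std::shared_ptr<std::string> data_;
    };

    int main() {
        CowString a("hello");
        CowString b = a;                                     // cheap copy: buffer is shared
        std::cout << a.shares_buffer_with(b) << '\n';        // 1
        b.set(0, 'H');                                       // first write triggers the deep copy
        std::cout << a.shares_buffer_with(b) << '\n';        // 0
        std::cout << a[0] << b[0] << '\n';                   // hH
    }
    ```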

    Read the article

  • Web-based CMS for mobile app

    - by JWood
    I'm just about to start developing a mobile app which needs to be fed from a CMS. I started designing the tables when I thought there must be something out there which could save me a load of time and let me concentrate on the mobile side of things. So, I'm looking for a CMS that will let me create hierarchical "pages" which will just be 4-5 database fields, with a simple front-end to allow editing and updating them. I don't mind having to write some code to lay out the database and forms etc; any saving on starting from scratch would be good. The only requirement is that I be able to access the data via some sort of web service - REST, JSON, XML, anything really... Can anyone suggest anything that might help? Thanks, J

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I did indeed write this cursor, and when I look back, I wonder how naive I was to write it. The reason for writing this cursor was to free up my database from any existing connections so I could do database operations. This worked fine, but there could be a potentially big issue if an important transaction was killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. Some developers write code after the outermost BEGIN…END and some write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pane and Query Pane – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where there are more than 100 SQL Server Views created – I am very confident that at one or two places you will notice the scenario wherein the ORDER BY clause is used in a View with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge its presence either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how to create comma separated values from a table in the database? The answer is also very common if we use XML. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in this article are not old. They are still valid and many of the functions are still working as mentioned in the article. I believe this one article will put you on the track to use Azure! 
    Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that would return the size of each index on any particular table. We needed a query that would return an additional column in the above listed query containing the size of the index. This article presents two of the best solutions to the puzzle. 2010 Well, this week in 2010 was the week of puzzles, as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky but for sure bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will for sure help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it – you enjoy fixing it and start to appreciate the breaking changes. Well, this was exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is the question for you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve a directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
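    Two of the techniques referenced above can be summarised in a few lines of T-SQL. This is a sketch using placeholder names (SampleDB, SampleTable, Name), not the exact code from the linked posts.

    ```sql
    -- Alternative to the kill-all-connections cursor: take the database to
    -- single-user mode, rolling back any open transactions, then restore it.
    ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    -- ... perform the maintenance work ...
    ALTER DATABASE SampleDB SET MULTI_USER;

    -- Comma Separated Values (CSV) from a table column using FOR XML PATH.
    SELECT STUFF((SELECT ',' + Name
                  FROM   SampleTable
                  FOR XML PATH('')), 1, 1, '') AS CsvList;
    ```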

    Read the article
