Daily Archives

Articles indexed Tuesday, November 15, 2011


  • jQuery + Not Selector

    - by Andrej
    Being a newbie to JavaScript and jQuery, I am not sure whether this is not possible at all or I am just making a completely dumb mistake: I am trying to hide a div on any click outside the div itself. I have simplified the code for demonstration purposes:

    HTML:

        <html>
          <div class="tooltip"> Bla Bla Bla </div>
        </html>

    jQuery:

        $('html:not(.tooltip)').click(function() {
            $('.tooltip').hide();
        });

    Demo: http://jsfiddle.net/yCx6F/1/
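
    One common fix (a hedged sketch, not from the post): the handler fires for every click because events bubble up to the html element, so check where the click originated instead:

        $(document).click(function (e) {
            // Hide the tooltip only when the click did not start inside it.
            if (!$(e.target).closest('.tooltip').length) {
                $('.tooltip').hide();
            }
        });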

    Read the article

  • High PageIOLatch_SH Waits with High Drive Idle times

    - by Marty Trenouth
    We are experiencing a high volume of PageIOLatch_SH waits on our database (row counts in the billions). However, our drive idle time percentage hovers around 50-60 percent. CPU usage is nil. The Database Tuning Advisor gives no suggestions for optimization. The actual query plan from the single stored procedure used on the database puts the majority of the expense on index seek operations (yeah, I know these should be optimal). Anyone have suggestions on how to increase throughput?
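
    For anyone reproducing this analysis, a hedged first step is to confirm which waits actually dominate, using the standard wait-stats DMV (not from the post):

        -- Top waits by accumulated wait time
        SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;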

    Read the article

  • Implement python replace() function without using regexp

    - by jwesonga
    I'm trying to rewrite the equivalent of the Python replace() function without using regexps. Using this code, I've managed to get it to work with single characters, but not with more than one character:

        def Replacer(self, find_char, replace_char):
            s = []
            for char in self.base_string:
                if char == find_char:
                    char = replace_char
                #print char
                s.append(char)
            s = ''.join(s)

        my_string.Replacer('a','E')

    Anybody have any pointers on how to make this work with more than one character? Example: my_string.Replacer('kl', 'lll')
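
    A hedged sketch of one substring-aware approach (plain scanning, no regexps; the function name is illustrative):

        def replace(base, find, repl):
            # Walk the string; on a substring match, emit the replacement
            # and skip past the match, otherwise copy one character.
            out = []
            i = 0
            while i < len(base):
                if base[i:i + len(find)] == find:
                    out.append(repl)
                    i += len(find)
                else:
                    out.append(base[i])
                    i += 1
            return ''.join(out)

        print(replace('klmkl', 'kl', 'lll'))  # -> lllmlll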

    Read the article

  • Using target-specific variable in makefile

    - by James Johnston
    I have the following makefile:

        OUTPUTDIR = build

        all: v12target v13target

        v12target: INTDIR = v12
        v12target: DoV12.avrcommontargets

        v13target: INTDIR = v13
        v13target: DoV13.avrcommontargets

        %.avrcommontargets: $(OUTPUTDIR)/%.elf
            @true

        $(OUTPUTDIR)/%.elf: $(OUTPUTDIR)/$(INTDIR)/main.o
            @echo TODO build ELF file from object file: destination $@, source $^
            @echo Compiled elf file for $(INTDIR) > $@

        $(OUTPUTDIR)/$(INTDIR)/%.o: %.c
            @echo TODO call GCC to compile C file: destination $@, source $<
            @echo Compiled object file for $<, revision $(INTDIR) > $@

        $(shell rm -rf $(OUTPUTDIR))
        $(shell mkdir -p $(OUTPUTDIR)/v12 2> /dev/null)
        $(shell mkdir -p $(OUTPUTDIR)/v13 2> /dev/null)

        .SECONDARY:

    The idea is that there are several different code configurations that need to be compiled from the same source code. The "all" target depends on v12target and v13target, which set a number of variables for that particular build. It also depends on an "avrcommontargets" pattern, which defines how to actually do the compiling. avrcommontargets then depends on the ELF file, which in turn depends on object files, which are built from the C source code. Each compiled C file results in an object file (*.o). Since each configuration (v12, v13, etc.) results in a different output, the C file needs to be built several times with the output placed in different subdirectories, for example "build/v12/main.o", "build/v13/main.o", etc. Sample output:

        TODO call GCC to compile C file: destination build//main.o, source main.c
        TODO build ELF file from object file: destination build/DoV12.elf, source build//main.o
        TODO build ELF file from object file: destination build/DoV13.elf, source build//main.o

    The problem is that the object file isn't going into the correct subdirectory: for example, "build//main.o" instead of "build/v12/main.o". That then prevents main.o from being correctly rebuilt to generate the v13 version of main.o. I'm guessing the issue is that $(INTDIR) is a target-specific variable, and perhaps this can't be used in the pattern targets I defined for %.elf and %.o. The correct output would be:

        TODO call GCC to compile C file: destination build/v12/main.o, source main.c
        TODO build ELF file from object file: destination build/DoV12.elf, source build/v12/main.o
        TODO call GCC to compile C file: destination build/v13/main.o, source main.c
        TODO build ELF file from object file: destination build/DoV13.elf, source build/v13/main.o

    What do I need to do to adjust this makefile so that it generates the correct output?
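
    The guess is right: GNU Make does not expand target-specific variables in prerequisite lists, only in recipes. A hedged sketch of one workaround (an assumption, not the poster's accepted fix) is to encode the variant directory in the target path and recover it from the pattern stem:

        # Recipe lines must start with a tab.
        OUTPUTDIR = build

        all: $(OUTPUTDIR)/v12/main.o $(OUTPUTDIR)/v13/main.o

        # The stem $* carries the variant (v12 or v13), so no
        # target-specific variable is needed in the prerequisites.
        $(OUTPUTDIR)/%/main.o: main.c
            @echo TODO compile $< into $@ for variant $*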

    Read the article

  • SOLVED: Lisp: macro calling a function works in interpreter, fails in compiler (SBCL + CMUCL)

    - by ttsiodras
    As suggested in a macro-related question I recently posted to SO, I coded a macro called "fast" via a call to a function (here is the standalone code in pastebin):

        (defun main ()
          (progn
            (format t "~A~%" (+ 1 2 (* 3 4) (+ 5 (- 8 6))))
            (format t "~A~%" (fast (+ 1 2 (* 3 4) (+ 5 (- 8 6)))))))

    This works in the REPL, under both SBCL and CMUCL:

        $ sbcl
        This is SBCL 1.0.52, an implementation of ANSI Common Lisp.
        ...
        * (load "bug.cl")
        22
        22
        $

    Unfortunately, however, the code no longer compiles:

        $ sbcl
        This is SBCL 1.0.52, an implementation of ANSI Common Lisp.
        ...
        * (compile-file "bug.cl")
        ...
        ; during macroexpansion of (FAST (+ 1 2 ...)). Use *BREAK-ON-SIGNALS* to
        ; intercept:
        ;
        ;   The function COMMON-LISP-USER::CLONE is undefined.

    So it seems that by having my macro "fast" call functions ("clone", "operation-p") at compile-time, I trigger issues in Lisp compilers (verified in both CMUCL and SBCL). Any ideas on what I am doing wrong and/or how to fix this?
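
    The usual remedy, shown here as a hedged sketch with made-up names (not the poster's code): functions a macro calls during expansion must exist at compile time, which is what EVAL-WHEN arranges:

        ;; Helper is defined at compile time, load time, and run time,
        ;; so the macro below can call it while expanding.
        (eval-when (:compile-toplevel :load-toplevel :execute)
          (defun double-it (x) (* 2 x)))

        (defmacro fast-double (n)
          ;; Runs at macroexpansion (i.e. compile) time.
          (double-it n))

        (defun main ()
          (format t "~A~%" (fast-double 21)))  ; prints 42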

    Read the article

  • Eclipse: Double semi-colon on an import

    - by smp7d
    Using Eclipse, if I have an extra semicolon on an import line (not the last import line), I see a syntax error in the IDE. However, this compiles fine outside of the IDE (Maven in this case). Example:

        import java.util.ArrayList;; //notice extra semicolon
        import java.util.List;

    Does anyone else see this behavior? Why is this showing as a syntax error? I am working with someone who keeps pushing these to source control and it is irritating me (they clearly aren't using Eclipse). Full disclosure... I am using SpringSource Tool Suite 2.8.0.
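
    A minimal reproducer (hedged; the class body is added only for completeness) matching the poster's report that javac/Maven accepts what Eclipse flags:

        import java.util.ArrayList;; // stray semicolon between imports
        import java.util.List;

        public class Demo {
            public static void main(String[] args) {
                List<String> names = new ArrayList<String>();
                names.add("compiles with javac, flagged by Eclipse");
                System.out.println(names);
            }
        }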

    Read the article

  • How to get current directory in C# for any Windows version?

    - by DarthRoman
    I am creating an application that can be executed on any Windows version, even mobile ones, and I am trying to get the current directory of the executable. The problem is that if I use the following code, it doesn't compile on Windows Mobile:

        string currentDirectory = System.IO.Path.GetDirectoryName(System.Windows.Forms.Application.ExecutablePath);

    And if I use this code, I receive something like file:\C:\xxx:

        string currentDirectory = System.IO.Path.GetDirectoryName(System.Windows.Forms.Application.ExecutablePath);

    I also need to get the root drive, and this code doesn't compile on Windows Mobile:

        String rootPath = Path.GetPathRoot(Environment.SystemDirectory);

    Does anyone know how to get the current directory of the application and the root path for any Windows version, even mobile?
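
    A hedged sketch of the Compact-Framework-friendly route; the URI normalization below is an assumption about how CodeBase is formatted on each platform:

        using System;
        using System.IO;
        using System.Reflection;

        class WhereAmI
        {
            static void Main()
            {
                // CodeBase exists on both the full and Compact Frameworks.
                string codeBase = Assembly.GetExecutingAssembly().GetName().CodeBase;
                // On desktop .NET this is a file:// URI; strip it if present.
                string path = codeBase.StartsWith("file:")
                    ? new Uri(codeBase).LocalPath
                    : codeBase;
                string currentDirectory = Path.GetDirectoryName(path);
                Console.WriteLine(currentDirectory);
                Console.WriteLine(Path.GetPathRoot(currentDirectory));
            }
        }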

    Read the article

  • Pre-populating Radio Buttons with Java

    - by Gary Deuces Rozanski
    Is it possible to pre-populate a radio button, using JSP, depending on the value in the database? If so, how? I have done research here at StackOverflow & Google but found no real solution. P.S. I hope somebody can help me out with this question. As you will see from previous questions, I am not a developer, but get loaded with questions, being the most technical on my team. Oh, the joy. Any help will be appreciated & I apologize in advance if my question is dumb.
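
    A hedged JSP sketch (the variable and field names are made up): emit the checked attribute on whichever option matches the value read from the database:

        <%-- 'status' is assumed to hold the stored value, e.g. "yes" --%>
        <input type="radio" name="status" value="yes"
               <%= "yes".equals(status) ? "checked" : "" %> /> Yes
        <input type="radio" name="status" value="no"
               <%= "no".equals(status) ? "checked" : "" %> /> No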

    Read the article

  • Python: How do I pass a variable by reference?

    - by David Sykes
    The Python documentation seems unclear about whether parameters are passed by reference or by value, and the following code produces the unchanged value 'Original':

        class PassByReference:
            def __init__(self):
                self.variable = 'Original'
                self.Change(self.variable)
                print self.variable

            def Change(self, var):
                var = 'Changed'

    Is there something I can do to pass the variable by actual reference? Update: I am coming to the conclusion that while Andrea answered my actual question (Can you... No, but you can...), on the subject of pass by reference Blair Conrad is more technically correct. As I understand it, the crux is that a copy of a reference is being passed. If you assign to that copy, as in my example, then you lose the reference to the original and it remains unchanged. If, however, you 'use' that reference, for example append to a passed list, then the original is changed. I will see how the comments and votes go before choosing the answer people think is the best.
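
    A hedged sketch of the standard workaround the update alludes to: wrap the value in a mutable container and mutate it in place, so the caller sees the change.

        class PassByReference:
            def __init__(self):
                self.variable = ['Original']   # one-element list as a cell
                self.change(self.variable)
                print(self.variable[0])        # -> Changed

            def change(self, var):
                var[0] = 'Changed'             # mutates the shared list

        PassByReference()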

    Read the article

  • Animation on single UIWebView

    - by Shri
    I have searched quite a bit (sample one, sample two) but can't seem to find the correct answer. I have a webview in my XYZ view controller class. When I press a button, it takes a URL from an array and reloads the same webview. Now I need to do the flip-book animation on it. The same webview should be reloaded. That is, the webview should rotate around its own axis 180/360 degrees while the loading of the next URL is going on. Is this possible?
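
    A hedged Objective-C sketch (nextURL is assumed to come from the poster's array): UIView's block-based transition can flip the same web view while the new request loads:

        [UIView transitionWithView:self.webView
                          duration:0.6
                           options:UIViewAnimationOptionTransitionFlipFromRight
                        animations:^{
                            [self.webView loadRequest:[NSURLRequest requestWithURL:nextURL]];
                        }
                        completion:nil];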

    Read the article

  • MatheMagics - Guess My Age - Method 2

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    MatheMagic – Guess My Age – Method 2

    The Mathemagician stands on the stage and asks an adult to do the following:

        · Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper.
        · Do it silently! Don't tell me the results until I ask for them directly.
        · Multiply your age by 2.
        · Add 7 to the result.
        · Multiply the result by 5.
        · Tell me the result.

    I will nonetheless immediately tell you what your age is. How do I do this? Let's do the algebra. Let A denote your age: (2A + 7) x 5 = 10A + 35, so the result is of the 3-digit form XY5. Now make two numbers out of the result - the last digit and the number before it. The last digit is obviously 5; the other number has 2 digits (or 3 for a centenarian), and this number is the age + 3.

    Example: I am 76 years old and here is what happens when I do the steps:

        76 x 2 = 152
        152 + 7 = 159
        159 x 5 = 795

    This is made of 79 and 5. And ... 79 - 3 = 76.

    A note to the socially aware mathemagician – it is safer to do it with a man. The chances of a veracious answer are much, much higher! The trick may be accomplished on any 2 or 3 digit number, not just one's age, but if you want to know your date's age, it's a good way to elicit it.

    That's All Folks

    PS for more Ageless "Age" mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age-method-1.aspx

    Read the article

  • MatheMagics - Guess My Age Method 1

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    MatheMagic – Guess My Age – Method 1

    The Mathemagician stands on the stage and asks an adult to do the following:

        · Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper.
        · Do it silently! Don't tell me the results until I ask for them directly.
        · Compute a single digit multiple of 9 – any one of 9, 18, 27, ... all the way to 81 will do.
        · Now multiply your age by 10.
        · Subtract the 9 multiple from this number.
        · Tell me the result.

    Notice that I don't know which multiple of 9 you subtracted from 10 times your age. I will nonetheless immediately tell you what your age is. How do I do this? Let's do the algebra:

        10X – 9Y = 10X – 10Y + Y = 10(X – Y) + Y

    Now remember, you asked an adult, so his/her age is a two digit number (maybe even 3 digits), thus reducing it by the single digit multiplied by nine is still positive – the lowest it can be is 100 – 81, which yields 19. Now make two numbers out of the result: the last digit and the number before it. This number is X – Y, or the age minus the single digit you selected. The last digit is this very single digit. This is always so regardless of the digit you selected. So... add this digit to the other number and you get back the age! Q.E.D.

    Example: I am 76 years old and here is what happens when I do the steps:

        76 x 10 = 760
        760 – 18 = 742, made of 74 and 2. My age is 74 + 2.
        760 – 81 = 679, made of 67 and 9. My age is 67 + 9.

    A note to the socially aware mathemagician – it is safer to do it with a man. The chances of a veracious answer are much, much higher! The trick may be accomplished on any 2 or 3 digit number, not just one's age, but if you want to know your date's age, it's a good way to elicit it.

    That's All Folks

    PS for more Ageless "Age" mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age---method-2.aspx
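
    For the record, a compact LaTeX restatement of why the digits always separate cleanly (my formalization, not the author's):

        % Age X (a 2- or 3-digit number), chosen digit Y in 1..9:
        \[
          10X - 9Y = 10(X - Y) + Y, \qquad 1 \le Y \le 9, \quad X - Y \ge 1,
        \]
        % so the units digit of the result is exactly Y, the remaining
        % digits spell X - Y, and adding the two parts recovers the age X.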

    Read the article

  • South Florida Stony Brook Alumni & Friends Reception 2011

    - by Sam Abraham
    It’s official: we are kicking off a local South Florida chapter for Stony Brook alumni and friends in the area to keep in touch. Our first networking event will take place at Champps, Ft Lauderdale on November 17th, 6:00-8:00 PM. Admission is free and open to everyone, whether or not they are Stony Brook alums. The team at Champps is offering us great specials (happy hour deals, half-price appetizers, etc.) that we can choose to enjoy while we network and catch up. (Event announcement: http://alumniandfriends.stonybrook.edu/page.aspx?pid=299&cid=1&ceid=171&cerid=0&cdt=11%2f17%2f2011) I look forward to sharing and reviving my college experience, which I believe was the starting line of my ongoing life journey. It would also be great to hear others’ takes as they reflect on their experiences throughout their college years. I invite anyone interested in keeping in touch with friends and alums of Stony Brook to join our LinkedIn or Facebook groups. The Stony Brook Alumni Association – South Florida Chapter LinkedIn Group: http://www.linkedin.com/groups?gid=3665306&trk=myg_ugrp_ovr The Stony Brook Alumni Association – South Florida Chapter Facebook Group: http://www.facebook.com/#!/groups/114760941910314/

    Read the article

  • Calculated Fields - Idiosyncrasies

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Calculated Fields and some of their Idiosyncrasies

    Did you ever try to write a calculated field formula directly into the screen? Good luck – you'll need it! Calculated fields are a sophisticated OOB feature of SharePoint, so you could think that they are best left to the end users – at least to the power users. But they reach their limits before the "professionals" do, and the tough ones come back to us anyway. Back to business: the simpler the formula, the easier it is. Still, use your favorite editor to write it, then cut it and paste it into the ridiculously small window. What about complex formulae? Write them in steps! Here is a case in point and an idiosyncrasy or two.

    Our welders need to be certified and recertified every two years. Some of them are certifiable..., but I digress. To be certified you need to pass an eye exam and two more tests – test A and test B. For each of those you have an expiry date. When renewed, each expiry date is advanced by two years from the date of renewal. My users wanted a visual clue so that when the supervisor looks at the list, she'll have a KPI symbol telling her if anything expired (red), is going to expire within the next 90 days (yellow), or is not to be worried about (green). Not all the dates are filled, and any blank date implies a complete lack of certification in the particular requirement. Obviously, I needed to figure the minimum of these 3 dates – a simple enough formula: =MIN([Date_EyeExam], [Date_TestA], [Date_TestB]). Aha! Here is idiosyncrasy #1. When one of the dates is null, MIN(Date1, Date2) returns the non-null date. Null is construed as "far, far away". The funny thing is that when you compare it to Today, the null is the lesser one. So a null is less than today in a comparison, but not when MIN is calculated.

    Now, to me the fact that the welder does not have an exam date is synonymous with his exam being prehistoric, or at least past due. So here is what I did.

    Solution: let's set a blank date to 1/1/1800. How will we do that? Use the IF: IF([Field] rel relValue, TrueValue, FalseValue). rel is any relational operator <, >, <=, >=, =, <>. If the field is related to the relValue as prescribed, the IF returns the TrueValue, otherwise it returns the FalseValue. Thus =IF([SomeDate]="",1/1/1800,[SomeDate]) will return 1/1/1800 if the date is blank, and the date itself if not. So, using this formula, if the welder missed an exam, the returned exam date will be far in the past.

    It would be nice if we could take such a formula and make it into a reusable function. Alas, here is a calculated field's serious shortcoming: you cannot write subs and functions!! Aha, but we can use interim calculated fields! So let's create 3 calculated fields as follows:

        1: c_DateTestA as a calculated field of the date type, with the formula: IF([Date_TestA]="",1/1/1800,[Date_TestA])
        2: c_DateTestB as a calculated field of the date type, with the formula: IF([Date_TestB]="",1/1/1800,[Date_TestB])
        3: c_DateEyeExam as a calculated field of the date type, with the formula: IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam])

    And now use these to get c_MinDate. This is again a calculated field of type date, with the formula: MIN(c_DateTestA, c_DateTestB, c_DateEyeExam). Note that I omitted the square brackets. In properly named fields – where there are no embedded spaces – we don't need the square brackets. I actually strongly recommend using underscores in place of spaces in all the field names in your lists.
    Among other things, it makes using CAML much simpler. Now we still need to apply the KPI to this minimal date. I am going to use the available KPI graphics that come with SharePoint and are always available in your 12 hive:

        "/_layouts/images/kpidefault-2.gif" is the red KPI
        "/_layouts/images/kpidefault-1.gif" is the yellow KPI
        "/_layouts/images/kpidefault-0.gif" is the green KPI

    And here is the nested IF formula that will do the trick:

        =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif"))

    Nice! BUT when I tested, it did not work! This is idiosyncrasy #2: a calculated field based on a calculated field based on a calculated field does not work. You have to stop at two levels!

    Back to the drawing board: we have to reduce by one level. How? We'll eliminate the c_DateX items in the formula and replace them with the proper IF formulae. Notice that this needs to be done with precision. You are much better off doing it in your favorite line editor than inside the cramped space that SharePoint gives you. So here is the result:

        MIN(IF([Date_TestA]="",1/1/1800,[Date_TestA]),
            IF([Date_TestB]="",1/1/1800,[Date_TestB]),
            IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam]))

    The parentheses have to match for this formula to work. Now we can leave the KPI formula as is and test again. This time with SUCCESS!

    Conclusion: build the inner functions first, and then embed them inside the outer formulae. Do this as long as necessary. Use your favorite line editor. Limit yourself to 2 levels.

    That's all folks! Almost! As soon as I finished doing all of the above, my users added yet another level of complexity. They added another test, a test that must be passed but never expires, and asked for yet another KPI, this time in black, to denote that a test is not just past due but altogether missing. I just finished this. Let's hope it ends here!

    And oh, the formula

        =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif"))

    deals with "Today", and this is a subject deserving a discussion of its own! That's all folks?! (And this time I mean it.)

    Read the article

  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Naming Documents (or is it "Document, Naming"?)

        Tis but thy name that is my enemy;
        Thou art thyself, though not a Montague.
        What's Montague? It is nor hand, nor foot,
        Nor arm, nor face, nor any other part
        Belonging to a man. O, be some other name!
        What's in a name? That which we call a rose
        By any other name would smell as sweet;
        So Romeo would, were he not Romeo call'd,
        Retain that dear perfection which he owes
        Without that title. Romeo, doff thy name
        And for that name which is no part of thee
        Take all myself.
        – Shakespeare, Romeo and Juliet, Act II, Scene 2

    We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed it is because of their names that the doomed lovers die. There might be life and death in a name. (BTW, when I wrote this monogram, I was in Hatfield, PA. Remember the Hatfields and the McCoys?) This is a bit extreme, but in the field of Knowledge Management (KM) names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? "Managing a site" or "Site, managing"? Nine times out of ten I'd opt for the latter. Almost everything we do is "managing", so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to.

    So, answer this – is it a "rule of thumb" or a "thumb rule"? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit concrete and abstract (more about this later), but to most people "thumb rule" is meaningless while "rule of thumb" is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect the "rule of thumb". It is the more meaningful title.

    Abstract and concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about "running". To an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete first, but within the audience's jargon.

    Even the title of this monogram is a question. Do I name it "Naming Documents" or "Documents, Naming"? Well, my own rule of thumb ("Here he goes again!?") states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former. A better title could have been "Naming Objects", only the word "Object" is a bit too abstract. How about just "Naming" or "Naming, rules of"? You get the drift. One of the ways to resolve all of this is to store the documents in knowledge bases, which may become the subject of a future punditry. Knowledge bases use keywords to describe their content. Use a metadata store for the keywords to at least attempt some common ground.
    Here is another general rule (rule of thumb?!!) – put at least one keyword in the title, and use subtitles. Here is an example: "Migrating documents – Screening, cleaning, and organizing our knowledge." The main keyword is "documents"; next is "migrating"; other keywords also appear in the subtitle: "screening", "cleaning", and "organizing". Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • What's in a Name

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Microsoft – what's in a name?

    Long ago I heard a dentist joke. It went like this: a dentist went to a conference, and at night he picked up a girl and they went to his room. The morning after, she asked: "Are you a dentist?" "Yes," said he. "You must be a very good one." "I am. How did you guess?" "I didn't feel a thing!!" That let my wild imagination roam. "Had it been Bill Gates," I thought, "the punch line would have been: 'Now I know why you called it Microsoft!!'" That's All Folks

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and growing too big. Notably, this has been observed when one has started to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data in the database, as well as some fixes to correct bugs which were found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:

      • Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases to fix underlying bugs which prevented the tool from being fully effective.
      • Brian Harry's blog post here describes the problem too.
      • This forum thread describes the problem with some solution hints.
      • Ravi Shanker's blog post here describes best practices on solving this (TBP).
      • Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

      • Publishing of test results from builds
      • Publishing of manual test results and their attachments in particular
      • Publishing of deployment binaries for use during a test run
      • Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also the pushing of binaries, which happens for automated test runs – including tests run during a build using code coverage, which will include all the files in the deployment folder – contributes a lot to the size of the attached data.

    In order to handle this systematically, I have set up a 3-stage process:

      1. Find out if you have a database space issue
      2. Set up your TFS server to minimize potential database issues
      3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out about this you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I find queries often faster to use because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them.

    If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gives the following results (shown as a screenshot in the original post). It is pretty easy to see on which collection to start the work.

    Find out which tables are possibly too large

    Keep a special watch out for the tbl_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result (screenshot in the original post). From Grant's blog we learnt that tbl_Content is in the version control category, so the only major issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this; replace the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got this result (had to remove some names), out of more than 100 team projects accumulated over quite some years (screenshot in the original post). As can be seen, it is pretty obvious that the "Byggtjeneste – Projects" is the main team project to take care of, with the ones on lines 2-4 as the next ones.

    Check which attachment types take up the most space

    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    We then got this result (screenshot in the original post). From this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    This gives a result like the one shown in the original post.

    Now you should have collected enough information to tell you what to do – if you have to do something at all – and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you have the problem yet, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS server is at SP1 level.

    Install the QFE for KB2608743 – which also contains detailed instructions on its use; download it from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
    Binaries will still be uploaded if:

      • Code coverage is enabled in the test settings.
      • You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

      • The build servers (the build agents)
      • The machine hosting the Test Controller
      • Local development computers (Visual Studio)
      • Local test computers (MTM)

    It is not required to install it on the TFS server, test agents or the build controller – it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can be – with some mental effort – deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. Stop all components that depend on the SQL server, like the TFS Server and SPS services – that is, all applications that connect to the SQL server. Then restart the SQL server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link here to that CU. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the CU) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

    Cleaning out the attachments

    The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique. It works, but you need to take care when cleaning up.

    Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a clean-up-once operation and in general maintenance. There are two scenarios we need to consider:

      1. Cleaning up an existing overgrown database
      2. Maintaining a server to avoid an overgrown database, using scheduled TAC runs

    1. Cleaning up a database which has grown too big due to these attachments

    This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size; don't bother about the smaller stuff. That can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or like this: if you want to remove only dll's and pdb's above that size, add an Extensions section. Without that section, all extensions will be deleted.

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done, so it can be wise to have a cleanup process with no restrictions on linked bugs at all. Note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above, with no LinkedBugs, as a sweeper task taking away all data older than you could care about, then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
    This task could be much more specific and, based on your analysis, clean out what you know is troublesome data.

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun >
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC mentions "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You could run a shrink database command after the TAC has run, in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough.

    For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database.

    To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.

    Read the article

  • The Obfuscated C Competition is Back!

    - by TATWORTH
    The Obfuscated C competition is back at http://www.ioccc.org/. The competition is open until 12/Jan/2012. The aims of the competition are:

      • To write the most obscure/obfuscated C program under the rules at http://www.ioccc.org/2011/rules.txt
      • To show the importance of programming style, in an ironic way
      • To stress C compilers with unusual code
      • To illustrate some of the subtleties of the C language
      • To provide a safe forum for poor C code :-)

    Even if you are not a C programmer, it is worth looking at some past entries at: http://en.wikipedia.org/wiki/International_Obfuscated_C_Code_Contest

    Read the article

  • Visual WebGui helps Dawsons put its Windows Forms HR system on the web

    - by Webgui
    Dawsons needed to upgrade their existing Windows Forms LAN human resources system to allow better and more flexible access to an ever increasing data size, as labor hire clients are demanding easy access to copies of tickets, licenses, etc. The company has some 30,000 applicants on file, but access to this data has been complex and limited with the current system. Therefore the IT department was asked to find possible solutions that would allow creating a web based application while utilizing as much of the existing Windows Forms code as possible. Visual WebGui was found to be highly regarded in many framework comparisons, so the team decided to give Visual WebGui a try. It didn't take long for them to recognize that Visual WebGui controls appear and react over the web the same as desktop controls. This, and the fact that most of the code was directly ported, which saved Dawsons hundreds of development hours, are what make Visual WebGui so unique and productive. "My first impression of Visual WebGui was perhaps disbelief. Not being a seasoned Web programmer, I initially found it hard to accept so much functionality from a web based application. Also, the speed is exceptional," said John Sainsbury, Financial Controller of the Dawsons Group, who added, "Since working with Visual WebGui, I have showcased parts of the application to our major clients, some of whom use SAP portals, and they are amazed." The result was so satisfying that the company is now looking to produce a mobile version for accessing the labor pool on the go. "By removing the barriers of the local network, Visual WebGui has changed how we can do business," said Lloyd Everist, General Manager, Dawsons Group. Read more about this story

    Read the article

  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1: I just assembled an exact replica of this server and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration. The success was confirmed by a login to the initial account. There must be a hardware component that is faulty. Since the error mentions HT, which I believe to be Hyper-Threading, I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware, or recommend another approach to this issue.

    Issue: I was attempting to install Ubuntu 10.04 LTS on this system with the board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following:

        Node0: NB WatchDog Timer Error
        Node1: HT Link Sync Error
        Node2: HT Link Sync Error
        ...
        Node7: HT Link Sync Error

    It then says "Press F1 to continue/resume". After pressing F1 the system will boot from the Ubuntu 10.04 LTS installation disc. However, it will fail at the same stage and go through the same process from there.

    Hardware: CPU: AMD Opteron X12 6172 G34 2.1G 18MB. Motherboard: Supermicro H8QG6-F. HDD: WD Caviar Green 2TB 5.4K RPM.

    Troubleshooting: I disabled RAID10 on the system and installed Ubuntu on a single drive. It installed successfully. I then went back to a RAID10 setup and attempted to install on the system again, and was able to make it through the partitioning stage. However, upon reboot, the system reported "Error: file not found" and then booted me into the GRUB rescue console. I feel I have aggravated the problem at this point, because when I attempt to install from the boot disc again, the system reboots upon hitting Enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc. I have not been able to find any information on this HT Link Sync Error, which I feel may have started the problems I am experiencing now with the installation of the OS. I am also aware that Ubuntu is said not to be supported by the motherboard, according to Supermicro's site. However, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know a reason why it's failing to install.

    Read the article

  • Sun Grid Engine (SGE) Jobs Not Visible After Adding virtual_free

    - by Gary Richardson
    I'm trying to use virtual_free to limit the number of large memory jobs running on each grid node in my cluster. This seems to be working as expected. After I modified my code to submit jobs with the virtual_free memory requests, qstat -f -q $queueName no longer shows a list of jobs waiting for a slot. The jobs are submitted to a specific queue (-q $queueName). I'm guessing this is happening due to the magic of SGE queue selection. Is there a way to make my jobs show up as before? Thanks!

    UPDATE: I'm using

        qstat -f -u '*' -q $queueName

    to view the queue. If I drop the queue argument, I can see the jobs. If I examine a specific job, I can see that it has the correct hard_queue_list value set. I'm also using Sun Grid Engine 6.1u4.
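
    For context, a hedged sketch of the submission pattern in question (queue name, memory value, and script are made up):

        # Request 4 GB of virtual_free for the job, in a specific queue
        qsub -q bigmem.q -l virtual_free=4G run_analysis.sh

        # List jobs for that queue, all users, full output
        qstat -f -u '*' -q bigmem.q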

    Read the article

  • JBoss basic access

    - by user101024
    I have JBoss 5 deployed on Solaris 10 - the server's connection has unrestricted high ports (>1023) open to the internet. I can access the box via ssh & FTP from a second server on the same subnet and from anywhere over the internet. JBoss is running over port 8080 and is accessible via http://localhost:8080 on the box itself. I cannot access it via http://ip.add.goes.here:8080 from either the other server on the same subnet or via the internet. Is there any service or configuration within JBoss or elsewhere on Solaris 10 that needs to be changed from its default to allow http traffic to be served? Thanks, Kevin
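
    One likely culprit (hedged; not confirmed in the post): JBoss 5 binds only to 127.0.0.1 by default, so remote clients are refused even with the firewall open. Binding to all interfaces is the usual fix:

        # Start JBoss listening on all interfaces instead of loopback only
        ./run.sh -b 0.0.0.0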

    Read the article

  • FreeBSD 8.1 64bit logrotate - ELF interpreter /libexec/ld-elf-so.1 not found

    - by Richard Knop
    I am trying to get logrotate running on a FreeBSD 8.1 virtual machine. I installed logrotate with pkg_add, I have created the logrotate.conf file, and I have also run:

        mkdir /var/lib/
        touch /var/lib/logrotate.status

    Now when I do

        /usr/local/sbin/logrotate -d /usr/local/etc/logrotate.conf

    I get this error:

        ELF interpreter /libexec/ld-elf-so.1 not found
        Abort

    The file ld-elf.so.1 exists:

        locate ld-elf.so.1
        /libexec/ld-elf.so.1
        /usr/libexec/ld-elf.so.1
        /usr/share/man/man1/ld-elf.so.1.1.gz
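
    A hedged diagnostic step (my suggestion, not from the post): the interpreter name in the error differs from the file on disk, so check what the binary actually asks for and whether it is, say, a 32-bit executable missing its 32-bit loader:

        # What kind of binary is it, and which interpreter does it want?
        file /usr/local/sbin/logrotate
        readelf -l /usr/local/sbin/logrotate | grep interpreter

        # Compare against the loaders actually present on the system
        ls -l /libexec/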

    Read the article

  • Deleted exchange account still being auto suggested

    - by mike G
    I set up a new hire in our domain in Exchange. When he arrived yesterday, I discovered his name had been misspelled. I deleted his account and created a new account with the proper spelling. The problem now is that his old email address is being suggested whenever anyone types in his first name. Users who email the bad address get a bounce and create more help desk tickets. Is there a way to update Exchange or purge the bad account?

    Read the article

  • Directly editing IIS 7 applicationHost.config configuration file

    - by lunadesign
    I know that IIS 7+ now uses XML config files instead of the metabase. I also know that if I edit a web.config file for a given site, IIS automagically detects the changes and implements any corresponding config changes. However, does this also apply to the server-level applicationHost.config settings file? (It's usually located in C:\windows\system32\inetsrv\config.) Specifically, is it safe to carefully edit this file instead of using IIS Manager or the appcmd command line utility? I couldn't find anything in the documentation that said it was okay or not okay to do this. I'm curious because I have to change the bindings for numerous sites from one IP to another. It would be much faster to simply do a global search and replace for the IP address in the config file instead of manually editing a few dozen sites in the GUI.
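
    For the batch rebinding itself, a hedged appcmd alternative to hand-editing (the site name, IP, and host header below are placeholders):

        rem Rebind one site to the new IP
        %windir%\system32\inetsrv\appcmd.exe set site "MySite" /bindings:http/10.0.0.2:80:www.example.com

        rem Or list the sites first, to script the change across all of them
        %windir%\system32\inetsrv\appcmd.exe list site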

    Read the article
