Search Results



  • File doesn't exist in Linux although locate still finds it in the terminal

    - by Mazen Ayman
    I'm a bit new to the unix/linux environment, but I have a small problem. I'm using "locate" to find the path of a file I need. It gives me a path, but the file doesn't exist at that path, like this:

        locate test1.txt
        /home/user/test files/text1.txt
        /home/user/test1.txt~

    The "test files" directory is where I was keeping the file. I copied it to the home directory once, but I deleted it, and I have no idea why locate keeps telling me there is still a tmp file for it. It's worth mentioning that I used the command

        locate test1.txt~ | xargs -n1 rm

    to remove that tmp file, but maybe that is what caused the problem. I tried showing hidden files and checking for temp files, but didn't find it either. Any clue what happened?
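    The usual explanation (assuming the default mlocate setup on Ubuntu) is that locate searches a prebuilt database rather than the live filesystem, so files deleted since the last index run keep showing up. A quick sketch of the fix:

        # rebuild the locate database so deleted files drop out of the results
        sudo updatedb

        # the stale entries should now be gone
        locate test1.txt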

    Read the article

  • How can I remove packages using preseed?

    - by Jorge Castro
    I'm setting up an automated "no questions asked" preseed system and using Dustin Kirkland's server preseed as an example. He uses the following line to install three packages as part of the automated install:

        d-i pkgsel/include string byobu vim openssh-server

    I am looking for the inverse of this: basically, to be able to remove packages as part of the automated install. I've checked the Installation Guide and I've checked this example preseed, but it's not clear whether this is the canonical list of every single option available. I am thinking I need to use something like

        d-i preseed/late_command string apt-remove packagename

    to clean up stuff I don't want when the install is done, but I am not sure.
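    A commonly suggested pattern (a sketch, not verified against every installer release) is to run the removal through late_command with in-target, which executes the command inside the freshly installed system; the package names here are placeholders:

        # purge unwanted packages once the base install has finished
        d-i preseed/late_command string in-target apt-get purge -y some-package another-package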

    Read the article

  • Installer gets stuck with a grayed-out forward button

    - by TRiG
    I have a CD with Ubuntu 10.10 and a laptop with Ubuntu 8.10. The laptop had all sorts of crud on it, and anything I wanted to keep was backed up on an external drive, so I was happy to do a wipe and reinstall instead of an update. So after a bit of faffing about trying to work out how to get the thing to boot from the CD drive, I did that. So the screen comes up with the choice: the options are Try Ubuntu and Install Ubuntu. I choose to install and to overwrite my current installation. So far so good. I then get a progress bar labelled something like copying files (I forget the exact wording) and further options to fill in for my location, keyboard locale, username and password. On each of these screens there are forward and back buttons. On the last screen (password), the forward button is greyed out. Well, I think to myself, no doubt it will become active when that copying files progress bar completes. The progress bar never completes. It hangs. And the label changes from copying files to the chirpy ready when you are. The forward button remains greyed out. The back button is as unhelpful as you'd expect it to be. And there's nothing else to click. We have reached an impasse. I tried restarting the laptop, to test whether it actually was properly installed. It wasn't. I tried to run Ubuntu live from the CD, to test whether the disk was damaged. That wouldn't work either, but I suspect it's just because the laptop is old and has a slow disk drive. I'm typing this question on another computer using the Ubuntu live CD and it's working fine. So there's nothing wrong with the CD.

    Read the article

  • An alternative to WebForms for developing in ASP.NET - any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now, but I often get annoyed with all the overhead (like ViewState and all the JavaScript it generates) and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it runs well, and you still have the ability to use code-behind pages and many ASP.NET controls, such as repeaters. Of course, without form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls. But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls; then you can access the control in your code-behind like this:

        ((HtmlInputText)txtName).Value = "blah";

    Here's an example that shows what you can do with a textbox and drop down list:

    Default.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form action="" method="post">
                <label for="txtName">Name:</label>
                <input id="txtName" name="txtName" runat="server" /><br />
                <label for="ddlState">State:</label>
                <select id="ddlState" name="ddlState" runat="server">
                    <option value=""></option>
                </select><br />
                <input type="submit" value="Submit" />
            </form>
        </body>
        </html>

    Default.aspx.cs

        using System;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;

        namespace NoForm
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    // Default values
                    string name = string.Empty;
                    string state = string.Empty;

                    if (Request.RequestType == "POST")
                    {
                        // If the form was submitted (post back)
                        name = Request.Form["txtName"];
                        state = Request.Form["ddlState"];
                        // Server-side form validation would go here,
                        // plus actions to process the form and redirect
                    }

                    ((HtmlInputText)txtName).Value = name;
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));
                    if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                        ((HtmlSelect)ddlState).Value = state;
                }
            }
        }

    As you can see, you have similar functionality to ASP.NET server controls but more control over the final markup, and less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and any thoughts on its usefulness? I have further details and tests on my blog.

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards
    Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices.

    Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford
    Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, the Snowflake Schema, and the Connected Replication pattern.

    Upgrading ALSB services to OSB | John Chin-a-Woeng
    John Chin-a-Woeng walks you through the upgrade from AquaLogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3).

    SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey
    Commiskey illustrates a scenario.

    JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema
    The "Test WebService" button in the WSDL Editor in JDeveloper 11gR2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same."

    Enterprise Business Intelligence 11g Seminar with Mark Rittman
    Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011.

    Data Integration Webcast Series
    Join Oracle experts for a series covering our data integration solutions. You'll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx

    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    There are times when it is desirable to know who called the method or property you are currently executing. Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.

    Determining the caller using the stack

    One of the ways of doing this is to examine the call stack. The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain, all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated) or by using StackFrame, which gets a single frame of the stack trace. Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation.

    For our simple example, let's say we are going to recreate the wheel and construct our own logging framework. Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller. We could easily do this as follows:

        static void Log(object message)
        {
            // frame 1 is our caller (frame 0 is this method); true requests source info
            StackFrame frame = new StackFrame(1, true);
            var method = frame.GetMethod();
            var fileName = frame.GetFileName();
            var lineNumber = frame.GetFileLineNumber();

            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, method.Name, message);
        }

    So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available), and then taking the information from the frame. This works fine, and if we tested it out by calling from a file such as this:

        // File c:\projects\test\CallerInfo\CallerInfo.cs

        public class CallerInfo
        {
            static void Main() { Log("Hello Logger!"); }
        }

    We'd see this:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack. But if you only want to get information on the caller of your method, there is another option…

    Determining the caller at compile-time

    In C# 5 (.NET 4.5), attributes were added that can be supplied on optional parameters of a method to receive caller information. These attributes can only be applied to optional parameters with explicit defaults. Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace:

    CallerFilePathAttribute – The path and name of the file that is calling your method.
    CallerLineNumberAttribute – The line number in the file where your method is being called.
    CallerMemberNameAttribute – The member that is calling your method.

    So let's take a look at how our Log method would look using these attributes instead:

        static void Log(object message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string fileName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            // we'll just use a simple Console write for now
            Console.WriteLine("{0}({1}):{2} - {3}",
                fileName, lineNumber, memberName, message);
        }

    Again, calling this from our sample Main would give us the same result:

        c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

    However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member. Thus, it does not give you as rich a set of details as a StackFrame (which can give you the calling type as well, and deeper frames, for example). Also, these are supported through optional parameters, which means we could call our new Log method like this:

        // They're defaults, why not fill 'em in
        Log("My message.", "Some member", "Some file", -13);

    In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods.

    These caveats aside, they do let you get similar information inside of methods at a much greater speed! How much greater? Well, let's crank through 1,000,000 iterations of each. Instead of logging to the console, I'll return the formatted string length of each. Doing this, we get:

        Time for 1,000,000 iterations with StackTrace: 5096 ms
        Time for 1,000,000 iterations with Attributes: 196 ms

    So you see, using the attributes is much, much faster! Roughly 26x faster, in fact.

    Summary

    There are a few ways to get caller information for a method. The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost. On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.

    Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName

    Read the article

  • DHCP and router load testing

    - by John H
    I manage a campground wifi network with an average of 10-60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP leases or failing to pass traffic) without any clear warning (low CPU utilization, etc.). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. The interesting thing is that the Netgear has lower specs than the Buffalo router it replaced, suggesting the issue is with the DD-WRT firmware. While I'll be pursuing this issue on the dd-wrt forums, I need a way to test routers. My vision is having 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side. I want the LAN computers to be generating various types of traffic and connections, as well as requesting DHCP addresses. A few notes: The wireless aspect should be a non-issue, as most clients would connect to a wireless bridge and come into the router through a network cable. I had a monitoring server with Nagios running check_dhcp against the router; this server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation. This question is somewhat related, but not exactly: Load testing wireless LANs. I am going to look at IxChariot. While I'd ideally like to use one computer on each side running Linux, and preferably free software, I can entertain running Windows, multiple computers, or non-free software. Total bandwidth doesn't seem to be the issue: I can transfer large files all day, and even on the busiest days the users seemed to only pull ~5Mbps. There is very little LAN-to-LAN traffic, and most of it might never have reached the main router. The issue I need to test for seems to be tied to active users or, more appropriately, active sessions. I know "active users" or "active clients" is a meaningless term from a router's standpoint and wouldn't mind having more appropriate terms to use. Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.
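    As a crude starting point (a sketch; the target address and client count here are placeholders), many concurrent sessions can be approximated from a LAN-side Linux box by opening parallel connections through the router to a WAN-side test server:

        #!/bin/bash
        # open many concurrent HTTP sessions through the router under test;
        # assumes a web server answering at $TARGET on the WAN side
        TARGET="http://192.0.2.10/"   # hypothetical WAN-side server
        CLIENTS=60                    # rough number of "active users"

        for i in $(seq 1 "$CLIENTS"); do
            curl -s -o /dev/null "$TARGET" &
        done
        wait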

    Read the article

  • Problem with OWA search: mailstore catalogues

    - by Anthony
    Dear all, I have trawled for the last few days but have not been able to come up with a solution; any help would be much appreciated. Exchange 2007 Standard on Windows 2003 64-bit, and users report that search is not working in OWA. So I first check that indexing is enabled on the mail stores:

        Get-MailboxDatabase | ft Name,IndexEnabled

    All report true, so I try a test:

        Test-ExchangeSearch [email protected]

    which comes back False with a -1. OK, so I run the reset script, ResetSearchIndex.ps1. This works, and the event log reports the stores being crawled and finished. BUT: my complete catalogue data is 300KB. LOL, I wish! Obviously all tests fail, so I do it manually: stop the indexing service, delete the catalogues, restart. Still no dice. This is where I'm up to. I have tried many different variations of the above and been through oodles of MS documentation. Has anyone got any ideas?
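    For anyone retracing these steps, the full reset for every database can be scripted from the Exchange Management Shell (a sketch; the scripts path is the default install location, so adjust if yours differs):

        # re-crawl all mailbox databases (Exchange 2007 scripts folder)
        cd "C:\Program Files\Microsoft\Exchange Server\Scripts"
        .\ResetSearchIndex.ps1 -force -all

        # then confirm indexing per database and re-test a mailbox
        Get-MailboxDatabase | ft Name,IndexEnabled
        Test-ExchangeSearch [email protected]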

    Read the article

  • Why does Javascript use JSON.stringify instead of JSON.serialize?

    - by Chase Florell
    I'm just wondering about "stringify" vs. "serialize". To me they're the same thing (though I could be wrong), but in my past experience (mostly with asp.net) I use Serialize() and never use Stringify(). I know I can create a simple alias in JavaScript:

        // either
        JSON.serialize = function(input) {
            return JSON.stringify(input);
        };

        // or
        JSON.serialize = JSON.stringify;

    http://jsfiddle.net/HKKUb/

    but I'm just wondering about the difference between the two and why stringify was chosen. For comparison purposes, here's how you serialize an object to an XML string in C#:

        public static string SerializeObject<T>(this T toSerialize)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(toSerialize.GetType());
            StringWriter textWriter = new StringWriter();
            xmlSerializer.Serialize(textWriter, toSerialize);
            return textWriter.ToString();
        }

    Read the article

  • How to convert .flv file to .3gp using ffmpeg?

    - by Chetana
    I have a script that converts any video format to 3gp using ffmpeg on one server, but on another server it does not work. This is the script:

        exec("ffmpeg -i test.flv -sameq -acodec libmp3lame -ar 22050 -ab 96000 -deinterlace -nr 500 -s 320x240 -aspect 4:3 -r 20 -g 500 -me_range 20 -b 270k -deinterlace -f flv -y test.3gp");

    Can anyone tell me what is wrong in the script? This is my ffmpeg build:

        root@ninja [~]# ffmpeg -formats
        ffmpeg version CVS, build 3277056, Copyright (c) 2000-2004 Fabrice Bellard
          configuration: --enable-mp3lame --enable-libogg --enable-gpl --disable-mmx --enable-shared
          built on Jun 17 2009 10:51:43, gcc: 4.1.2 20080704 (Red Hat 4.1.2-44)
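    One thing worth checking (a sketch, since codec support depends on how each server's ffmpeg build was configured): the trailing -f flv forces the FLV muxer even though the output file is named test.3gp. Dropping it lets ffmpeg pick the container from the extension:

        # hypothetical variant: let the .3gp extension select the muxer
        ffmpeg -i test.flv -s 320x240 -aspect 4:3 -r 20 -b 270k \
               -acodec libmp3lame -ar 22050 -ab 96000 \
               -y test.3gp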

    Read the article

  • qemu/virt-manager: no permission on shared folder

    - by TomAtToe
    I have a strange problem. I am trying to create a shared folder via the add-hardware > filesystem option. For Type and Mode I chose Passthrough, and for Driver, Path. The source path is /free and the target is mytag. I mount it with:

        mount -t 9p -o trans=virtio mytag /mnt/test -oversion=9p2000.L

    Everything worked without problems, but when I enter /mnt/test and do an ls, I get "ls: Öffnen von Verzeichnis . nicht möglich: Keine Berechtigung" (in English, something like "ls: cannot open directory .: Permission denied"). I set the permissions of /free to 777 recursively, but nothing changed. I also tried some other modes in virt-manager, but nothing changes. Do you have any clues what I am doing wrong? The guest OS is Ubuntu 12.04 and the host OS is Ubuntu 11.10. Thank you for your help.
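    A frequent cause worth ruling out (hedged, since it depends on how the qemu process runs): in passthrough mode the guest accesses files with the credentials of the qemu process on the host, and on Ubuntu hosts AppArmor can additionally confine that process to the VM's own paths. The mapped access mode, which stores ownership in extended attributes instead, often works around this; in libvirt's domain XML (which virt-manager edits underneath) that corresponds to something like:

        <filesystem type='mount' accessmode='mapped'>
          <source dir='/free'/>
          <target dir='mytag'/>
        </filesystem>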

    Read the article

  • How to mail merge a hyperlink in Microsoft Word or Publisher 2010

    - by hjoelr
    I am trying to do an e-mail merge in Microsoft Publisher 2010 (which appears to do mail merging like Microsoft Word), and I want a merged email address to be automatically hyperlinked in the resulting email. For example, one of the merge fields could be "EmailAddress", with an example address being [email protected]. In the document, I would want the merge field "EmailAddress" to display as the default text of a hyperlink and also set the target of the hyperlink to "mailto:EmailAddress" (e.g. mailto:[email protected]). I can't figure out how to get Publisher 2010 to do that. I would think that it's possible, though. Any help or pointers would be greatly appreciated!
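    One approach often suggested for Word-style mail merge (untested here in Publisher) is nesting a MERGEFIELD inside a HYPERLINK field code, inserting each pair of field braces with Ctrl+F9 rather than typing them:

        { HYPERLINK "mailto:{ MERGEFIELD EmailAddress }" }

    Alt+F9 toggles the field-code view for editing; whether Publisher 2010 honors nested fields the same way Word does is an open question.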

    Read the article

  • The path to MCSE:SharePoint. The Overview.

    - by Enrique Lima
    There have been some changes to certifications recently, and with that, new challenges and requirements. In the past we had an MCTS and MCITP or MCPD on a specific product and that was it; now the story is somewhat different. You will need to not only know the product (yes, I am one that still focuses on knowing a product, not the test) but also the environment on which it sits or lives (therefore Windows Server and supporting services). The requirements for MCSE: SharePoint now take you through the MCSA: Windows Server 2012. Many have questioned this; I don't. Why? I have seen plenty of "accidental SharePoint farm administrators" that have no background with the server OS, much less with the services (like DNS and IIS). Again, I am not saying this will guarantee knowledge, but it does in some way require exposure to it. So, again, the next number of posts will provide guidance for the knowledge needed for the test requirements. If you have seen the way I go about this, you then know I don't focus on exam questions, but rather on providing guidance to the TechNet and MSDN documentation to get to know the product. I will also in this case go through the process of setting up your virtual environment to play with the products and get to know them. Now, the requirements themselves are:

    MCSA: Windows Server 2012
        Exam 70-410: Installing and Configuring Windows Server 2012
        Exam 70-411: Administering Windows Server 2012
        Exam 70-412: Configuring Advanced Windows Server 2012 Services

    SharePoint-specific exams
        Exam 70-331: Core Solutions of Microsoft SharePoint Server 2013
        Exam 70-332: Advanced Solutions of Microsoft SharePoint Server 2013

    Passing the 5 exams will grant you the MCSE: SharePoint credential.

    Read the article

  • mail server administration

    - by kibs
    My Postfix does not show that it is listening for the SMTP daemon; I am getting the message below:

        The message WAS NOT relayed
        Reporting-MTA: dns; mail.mak.ac.ug
        Received-From-MTA: smtp; mail.mak.ac.ug ([127.0.0.1])
        Arrival-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)

        Original-Recipient: rfc822;[email protected]
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.4.0
        Remote-MTA: dns; 127.0.0.1
        Diagnostic-Code: smtp; 554 5.4.0 Error: too many hops
        Last-Attempt-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)
        Final-Log-ID: 23434-08/A38QHg8z+0r7

        undeliverable mail MTA BLOCKED

    Output from the lsof -i tcp:25 command:

        master 3014 root 12u IPv4 9429 TCP *:smtp (LISTEN)

    (Postfix as a user is missing.)
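    For anyone triaging the same bounce: "554 5.4.0 Error: too many hops" usually means the message is looping, often because the destination domain is being relayed back to the same host instead of accepted as a final destination. A quick check sketch (the parameter names are standard Postfix; the right values depend on this site's setup):

        # is the domain listed as a final destination?
        postconf mydestination myhostname mydomain

        # watch a test message loop in real time (log path varies by distro)
        tail -f /var/log/mail.log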

    Read the article

  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    I've used rsync/ssh for some time to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a password-less ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe it's since then) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ...which I do not understand. Besides that, nothing that I know of changed in either source or destination that could be related to rsync or ssh. I did check a few things and all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh stuff like keys hasn't changed. I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user used by rsync through ssh). I read here and there that the error means I have to ensure my rsyncd.conf has read only = no in it, but as far as I know I never used rsyncd, never configured anything for it, and until now it worked like a charm. I use the following command to do the backup:

        rsync -ab --recursive \
            --files-from="$FILES_FROM" \
            --backup-dir=backup_$SUFFIX \
            --delete \
            --filter='protect backup_*' \
            $WDIRECTORY/ \
            remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened.

    Edit: As suggested in comments, I also tried passing commands to ssh (but not from inside an ssh session). That worked as expected. I also tried a single rsync command, which didn't work, failing just like the complete backup command:

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

    and

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF
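    A hedged guess worth checking on the NAS side: some Synology DSM updates route incoming rsync-over-ssh through a daemon-style rsync module configuration, in which case a module marked read-only would produce exactly this error even though plain ssh commands work. A quick inspection sketch (paths are typical for DSM, not guaranteed):

        # on the NAS, look for a daemon config that marks modules read-only
        cat /etc/rsyncd.conf

        # a writable module would need something like:
        #   [backups]
        #   path = /abs_path_to_backups/backups
        #   read only = no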

    Read the article

  • XNA Skinned Animated Mesh Rendering Exported from Maya

    - by Devin Garner
    I am working on translating an old RTS game engine I wrote from DirectX 9 to XNA. My old models didn't have animation and are in an old format, so I'm trying with an FBX file. I temporarily "borrowed" a model from League of Legends just to test whether my rendering is working correctly. I imported the mesh/bones/skin/animation into Maya 2012 using an "unnamed" third-party import tool. (Obviously I'll have to get legit models later, but I just want to test whether my programming is correct.) Everything looks correct in Maya and it renders the animations flawlessly. I exported everything into a single FBX file (with only a single animation). I then tried to load this model using the example at the following site: http://create.msdn.com/en-US/education/catalog/sample/skinned_model With my exported FBX, the animation looks correct for most of the frames, but at random times it screws up for a split second. Basically, the body/arms/head will look right, but the leg/foot will shoot out to a random point in space for a second and then go back to the normal position. The original FBX from the sample looks correct in my program. It seems odd that my model was imported into Maya wrong, since it displays fine in Maya. So I'm thinking either I'm exporting it wrong, or the sample code is bad and the model from the sample caters to the sample's bad code. I'm new to 3D programming and Maya, so chances are I'm doing something wrong in the export. I'm using mostly the defaults, but I've tried all three interpolation modes (quaternion, euler, resample). Thanks

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester; how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0')
            {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward: it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code works, and if you're not familiar with the operator priorities in this code then it's a problem. I'm wondering whether the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general for code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

  • Mac OS X: at command not working

    - by David.Chu.ca
    I want to schedule a job using the at command, so I tried the following:

        $ at now + 1 minute
        echo 'Test at command'
        <EOD>

    I can see the job is scheduled using at -l; however, no echo output ever appears. I guess that I may need to add my user to the at.allow file, but I cannot find at.allow on my Mac (Snow Leopard). I'm not sure what I need to do to get this at command working.
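    A detail specific to Mac OS X (documented in "man atrun"): the atrun daemon that actually executes queued jobs is disabled by default, so jobs are accepted but never run until it is enabled:

        # enable the atrun launchd job so queued "at" jobs get executed
        sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist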

    Read the article

  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy, and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task of testing is not as simple as testing the class: it means testing it under the endless possible situations threading creates. To be more precise, let me tell you about the design of one of our applications: When a user makes a request, several threads are started that each analyse a part of the data to complete the analysis. These threads run for a certain time depending on the size of the chunk of data to analyse (the chunks are endless and of uncertain quality), or they may fail if the data was insufficient or lacking in quality. After each completes its analysis, it calls a handler which decides, as each thread terminates, whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they hold a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it's not very easy to test under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking, and synchronization?
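    One common baseline technique (a minimal JUnit 4 sketch; the names and the counter standing in for the real handler are hypothetical) is to release many threads at once with a latch to maximise contention, then assert the shared invariant afterwards:

        import static org.junit.Assert.assertEquals;

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicInteger;

        import org.junit.Test;

        public class HandlerConcurrencyTest {

            @Test
            public void allThreadsReportExactlyOnce() throws Exception {
                final int threads = 16;
                final AtomicInteger completed = new AtomicInteger();
                final CountDownLatch start = new CountDownLatch(1);
                final CountDownLatch done = new CountDownLatch(threads);
                ExecutorService pool = Executors.newFixedThreadPool(threads);

                for (int i = 0; i < threads; i++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                start.await();               // all threads block here...
                                completed.incrementAndGet(); // ...then hit the shared state together
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            } finally {
                                done.countDown();
                            }
                        }
                    });
                }

                start.countDown();                 // release every thread at once
                done.await(10, TimeUnit.SECONDS);
                pool.shutdown();

                assertEquals(threads, completed.get()); // the invariant under test
            }
        }

    Such tests are probabilistic rather than exhaustive, but run repeatedly they catch a surprising share of lost-update and deadlock bugs.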

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model
        {
            public function __construct( $data = null )
            {
                $this->name = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black-box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory(), which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
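    For the User-class concern specifically, one common refactoring (a sketch; the class and method names are hypothetical) keeps the constructor to pure assignment and moves the database lookup into a named factory, so tests can build a User from a plain array with no database at all:

        <?php
        class User
        {
            private $id;
            private $data;

            public function __construct($id, array $data)
            {
                $this->id = $id;      // only assignment happens here
                $this->data = $data;
            }

            // the factory does the work that used to hide in the constructor
            public static function fromDatabase($id, PDO $db)
            {
                $stmt = $db->prepare('SELECT * FROM users WHERE id = ?');
                $stmt->execute(array($id));
                $row = $stmt->fetch(PDO::FETCH_ASSOC);
                return new self($id, $row ?: array());
            }
        }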

    Read the article

  • Using an old RAID-configured disk after a new disk has been used in the controller

    - by Narendra
    I have a Dell PowerEdge T100 server with a Dell SAS 6 controller and two hard disks in RAID 1. Last week the server died, including one of the RAID 1 hard disks. We sent the server for repair and the problem with the PSU was fixed, but the repair guys also checked the RAID controller by configuring a new RAID with their own test hard disk. Now, if I install the one working RAID 1 disk and one new disk, will the RAID controller let me continue with my old RAID 1 and resync the new disk? What I fear is that the controller will want the test hard disk from the repair guys, forcing me to reconfigure RAID 1 and wipe the working disk. If so, I'll have to back up the working disk, reconfigure RAID 1, and reinstall. Or is there a better way? Note: I'm using the Dell SAS configuration utility to manage the RAID (press Ctrl+C after BIOS).

    Read the article

  • Ubuntu One API Java - how to use REST and AccessToken?

    - by Michael
    I am writing a Java app in Eclipse that backs up data to several consumer cloud services, encrypted and redundant. So far, I have successfully implemented the authentication process as it is described in the documentation. At this point, I do not know how to proceed. The next step would be authenticating with the stored AccessToken, and afterwards implementing upload/download/listing functionality through the REST API. I think I have to store the string oauth.getSerialized(). How do I authenticate with this string afterwards? This does not work, e.g.:

        AuthenticateResponse oauth = api.authenticate(serialized);
        api.setAuthorizer(new OAuthAuthorizer(oauth));

    Can someone tell me, please, how I can use the REST API with Java? There is no explanation or link in the developers area as far as I saw. And by the way, I wasted at least an hour fixing errors because some needed libraries are listed after the example code. :/

    Read the article

  • What is use of universal character names in identifiers in C++

    - by Jan Hudec
    The C++ standard (I noticed it in the new one, but it already existed in C++03) specifies universal character names, written as \uNNNN and \UNNNNNNNN and representing the characters with Unicode codepoints NNNN/NNNNNNNN. This is useful in string literals, especially since explicitly UTF-8, UTF-16, and UCS-4 string literals are also defined. However, universal character names are also allowed in identifiers. What is the motivation behind that? The syntax is obviously totally unreadable, the identifiers may be mangled for the linker, and it's not like there is any standard function to retrieve symbols by name anyway. So why would anybody actually use an identifier with universal character names in it? Edit: Since it already existed in C++03, an additional question would be whether you have actually seen code that uses it.
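    For concreteness, a small sketch of what such an identifier looks like (assuming a compiler that implements extended identifiers; support has historically varied):

        // both lines name an identifier containing U+00E9 (LATIN SMALL
        // LETTER E WITH ACUTE); the \u spelling survives even in a
        // pure-ASCII source file
        int caf\u00e9 = 1;   // universal-character-name spelling
        // int café = 1;     // equivalent spelling in a UTF-8 source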

    Read the article

  • Flashback Database

    - by Sebastian Solbach (DBA Community)
    Flashback Database is the Oracle Database feature that lets you rewind the database to a specific point in time, or more precisely to a specific System Change Number (SCN), comparable to the rewind button on a cassette recorder or the back button on a CD player. While this is rarely done on production systems, since rolling back loses all data changed after the restore point (unless it was exported beforehand), there are many use cases for test and standby systems:

    Rolling the system back after a failed application upgrade
    An alternative to point-in-time recovery (PITR) with a subsequent roll forward (particularly suited to standby systems)
    Test databases with a defined, reproducible starting point (e.g. for Real Application Testing)
    Database upgrade tests

    Some existing database features use Flashback Database implicitly:

    Snapshot Standby
    Re-instantiation of a standby (e.g. with Fast Start Failover)

    Although this feature is ideally suited to standby and test systems, there is a certain reluctance to use Flashback Database. One cause is often the fear of the additional load that writing the flashback logs creates, as well as the additional disk space required. In practice the load is usually fairly small (around 5%), and the additional space needed for the flashback logs can be estimated quite precisely. It is also frequently overlooked that, even without explicitly enabling flashback logging, it is possible to set a guaranteed restore point (Guaranteed Restore Point, or GRP) and then flash the database back to that restore point. In 11gR2, setting a guaranteed restore point works during normal operation. How exactly this works, how it differs from generally enabling flashback logs, how Flashback Database can be monitored, and what else has to be considered: that is what this tip is about.
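    As a sketch of the guaranteed-restore-point workflow described above (the restore point name is arbitrary; flashing back requires the database to be mounted, not open):

        -- works without FLASHBACK ON in 11gR2
        CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

        -- ... run the application upgrade / tests ...

        -- rewind to the restore point
        SHUTDOWN IMMEDIATE;
        STARTUP MOUNT;
        FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
        ALTER DATABASE OPEN RESETLOGS;

        -- clean up when the restore point is no longer needed
        DROP RESTORE POINT before_upgrade;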

    Read the article

  • Puppet master/agent basic setup

    - by lewap
    I'm trying to set up a basic puppet agent/master use case with an agent server and a master. I've set up two servers with puppet and puppetmaster respectively. After the following setup on both servers:

        puppet master --no-daemonize --verbose
        puppet agent --test
        puppet cert --list     # to get the list
        puppet cert --sign     # to sign it
        puppet agent --test

    I get the message:

        err: Could not retrieve catalog from remote server: hostname was not match with the server certificate
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: hostname was not match with the server certificate

    What do I need to do to get the agent/master to be able to talk to each other?
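    That error indicates the agent is reaching the master by a name that isn't in the master's certificate. A sketch of the usual fix (substitute the certificate name the master prints at startup for the placeholder below):

        # point the agent at the name the master's certificate was issued for
        puppet agent --test --server puppet-master.example.com

        # or make it permanent in /etc/puppet/puppet.conf on the agent:
        #   [agent]
        #   server = puppet-master.example.com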

    Read the article
