Search Results

Search found 7175 results on 287 pages for 'job hunting'.


  • About Interview structure for test automation lab developers

    - by Ikaso
Hi, I am interviewing new applicants for a team that is doing test automation on our company product(s). The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation is done on both the client side (user mode and kernel mode) and the server side (IIS, Windows Services, backend). We are doing mainly integration tests and black-box tests. I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview done before me with 2 programming questions. My programming question is rather simple (for example: reversing a singly linked list). My coworkers think that my questions will not find good developers since my questions are rather simple and well known, but so far most of the applicants fail those questions. My questions are: Should I change the structure of my interview for this kind of job? What questions do you ask to figure out if the applicant is test oriented? (Maybe I should provide a buggy implementation of a problem and let them find the bugs and then ask them about what tests they would have done.) Regards,
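
    For reference, a minimal Java sketch of the warm-up question mentioned above (iteratively reversing a singly linked list); the Node class here is a hypothetical stand-in, not code from the actual interview:

        class Node {
            int value;
            Node next;
            Node(int value) { this.value = value; }
        }

        class Reverse {
            // Iteratively reverse the list and return the new head.
            static Node reverse(Node head) {
                Node prev = null;
                while (head != null) {
                    Node next = head.next; // remember the rest of the list
                    head.next = prev;      // point the current node backwards
                    prev = head;
                    head = next;
                }
                return prev;
            }
        }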

    Read the article

  • Trouble accessing data in relationships in Ember

    - by user3618430
I'm having trouble saving data in this model relationship. My models are as follows: App.Flow = DS.Model.extend({ title: DS.attr('string'), content: DS.attr('string'), isCustom: DS.attr('boolean'), params: DS.hasMany('parameter', {async: true}) }); App.Parameter = DS.Model.extend({ flow: DS.belongsTo('flow'), param: DS.attr('string'), param_tooltip: DS.attr('string'), param_value: DS.attr('string') }); As you can see, I want Flows to have multiple Parameters. I have a rudimentary setup using Flow and Parameter fixtures, which behave as expected in the templates. However, when I try to create new ones in the controller, I have trouble setting the flow and parameter values correctly. var p = this.store.createRecord('parameter', { param: "foo", param_tooltip: "world", param_value: "hello" }); var f = this.store.createRecord('flow', { title: 'job', content: title, isCustom: true, params: [p] // doesn't seem to work }); f.set('params', [p]); // doesn't seem to work p.set('flow', f); // also doesn't seem to work // Save the new model p.save(); f.save(); I've tried a lot of solutions after staring at this and StackOverflow for a while (not just the ones listed). I'm not really sure what to try next. One thing that I noticed in the Ember inspector was that the IDs of these created elements were not integers (they were something like the string 'fixture_0'), but I'm not really sure why that would be, whether it's related, or how to fix it. Thanks!

    Read the article

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
I have a 2 GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of 5 columns match. Right now, I am deduping by first loading each column into a separate List, then iterating through the lists, deleting the duplicate rows as they are encountered and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone please share their experience of how they would go about such deduping? This deduping code will be throwaway, so I was looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudocode (roughly): Iterate over the rows i=current_row_no. Iterate over the row no. i+1 to last_row if(col1 matches //find duplicate && col2 matches && col3 matches && col4 matches) { col5List.set(i,get col5); //aggregate } Duplicate example: A and B will be duplicates. A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1) and the output would be A=(1,1,1,1,1+2) C=(2,1,1,1,1) [notice that B has been kicked out]
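
    A single pass keyed on the first four columns avoids the nested scan entirely. A rough Java sketch, assuming columns 1-4 form the duplicate key and column 5 is aggregated by '+' concatenation (the column order and aggregation rule are assumptions taken from the example above):

        import java.io.*;
        import java.util.*;

        public class Dedup {
            public static void main(String[] args) throws IOException {
                // args[0] = input file, args[1] = output file
                Map<String, StringBuilder> rows = new LinkedHashMap<>();
                try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cols = line.split("\t", 5);
                        String key = cols[0] + '\t' + cols[1] + '\t' + cols[2] + '\t' + cols[3];
                        StringBuilder agg = rows.get(key);
                        if (agg == null) {
                            rows.put(key, new StringBuilder(cols[4])); // first occurrence
                        } else {
                            agg.append('+').append(cols[4]);           // duplicate row: aggregate col5
                        }
                    }
                }
                try (PrintWriter out = new PrintWriter(new FileWriter(args[1]))) {
                    for (Map.Entry<String, StringBuilder> e : rows.entrySet()) {
                        out.println(e.getKey() + '\t' + e.getValue());
                    }
                }
            }
        }

    One linear read per file replaces the O(n²) pairwise comparison, at the cost of holding one map entry per unique key in memory.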

    Read the article

  • Algorithm complexity question

    - by Itsik
During a recent job interview, I was asked to give a solution to the following problem: Given a string s (without spaces) and a dictionary, return the words in the dictionary that compose the string. For example, s= peachpie, dic= {peach, pie}, result={peach, pie}. I will ask the decision variation of this problem: if s can be composed of words in the dictionary return yes, otherwise return no. My solution used backtracking (written in Java): public static boolean words(String s, Set<String> dictionary) { if ("".equals(s)) return true; for (int i=0; i <= s.length(); i++) { String pre = prefix(s,i); // returns s[0..i-1] String suf = suffix(s,i); // returns s[i..s.len] if (dictionary.contains(pre) && words(suf, dictionary)) return true; } return false; } public static void main(String[] args) { Set<String> dic = new HashSet<String>(); dic.add("peach"); dic.add("pie"); dic.add("1"); System.out.println(words("peachpie1", dic)); // true System.out.println(words("peachpie2", dic)); // false } What is the time complexity of this solution? I'm calling recursively in the for loop, but only for the prefixes that are in the dictionary. Any ideas?
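
    For comparison, a memoized sketch (not the poster's code) that caches suffixes already shown to be undecomposable, so each distinct suffix is explored at most once instead of the worst-case exponential number of times:

        import java.util.*;

        public class WordBreak {
            // Returns true if s can be split entirely into dictionary words.
            // 'dead' caches suffixes that are known not to decompose.
            static boolean words(String s, Set<String> dictionary, Set<String> dead) {
                if (s.isEmpty()) return true;
                if (dead.contains(s)) return false;
                for (int i = 1; i <= s.length(); i++) {
                    String pre = s.substring(0, i);
                    if (dictionary.contains(pre) && words(s.substring(i), dictionary, dead)) {
                        return true;
                    }
                }
                dead.add(s); // remember the failure
                return false;
            }

            public static void main(String[] args) {
                Set<String> dic = new HashSet<>(Arrays.asList("peach", "pie", "1"));
                System.out.println(words("peachpie1", dic, new HashSet<String>())); // true
                System.out.println(words("peachpie2", dic, new HashSet<String>())); // false
            }
        }

    With the failure cache there are at most n distinct suffixes, each scanned linearly, which bounds the dictionary lookups to O(n²); without it the recursion can revisit the same suffix exponentially often.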

    Read the article

  • Compromising design & code quality to integrate with existing modules

    - by filip-fku
Greetings! I inherited a C#.NET application I have been extending and improving for a while now. Overall it was obviously a rush job (or whoever wrote it was seemingly less competent than myself). The app pulls some data from an embedded device & displays and manipulates it. At the core is a communications thread in the main application form which executes a 600+ line method that calls functions all over the place, implementing a state machine - lots of if-state-then-do type code. Interaction with the device is done by setting the state/mode globally and letting the thread do its thing. (This is just one example of the badness of the code - overall it is not very OO-like; it is reminiscent of the embedded C style the device firmware is written in.) My problem is that this piece of code is central to the application. The software, communications protocol and device firmware are not documented at all. Obviously to carry on with my work I have to interact with this code. What I would like some guidance on is whether it is worth scrapping this code & trying to piece together something more reasonable from the information I can reverse engineer. I can't decide! The reason I don't want to refactor is because the code already works, and changing it will surely be a long, laborious and unpleasant task. On the flip side, not refactoring means I sometimes have to compromise the design of other modules so that I may call my code from this state machine! I've heard of "If it ain't broke don't fix it!", so I am wondering if it should apply when "it" is influencing the design of future code! Any advice would be appreciated! Thanks!

    Read the article

  • WSDL first for existing service layer

    - by Jurgen H
I am working on an existing Java project with a typical services-DAO setup for which only a web application was available. My job is to add web services on top of the services layer, but the web services have their own functional analysis and data model. The functional analysis of course focuses on what is possible in the different service methods. As good practice demands, we used the WSDL-first strategy and generated JAXB-bound Java classes and an SEI for the web services. After having implemented the web services partially, we noticed a 70% match between the data models. This resulted in writing converters which take the web service JAXB classes and map them to the service-layer classes. Customer customer = new Customer(); customer.setName(wsCustomer.getName()); customer.setFirstName(wsCustomer.getFirstName()); ... This is a very obvious example; some other mappings were a little more complicated. Can anyone share best practices, experiences or solutions for this kind of situation? Are any of these frameworks useful? http://transmorph.sourceforge.net/wiki/index.php/Main_Page http://ezmorph.sourceforge.net/ Please don't start a discussion about WSDL first vs code first.

    Read the article

  • Is there really such a thing as "being good at math"?

    - by thezhaba
Aside from gifted individuals able to perform complex calculations in their head, I'm wondering if proficiency in mathematics, namely calculus and algebra, really comes down to one's natural inclination towards the sciences, if you can put it that way. A number of students in my calculus course pick up material in seemingly no time, whereas I, personally, have to spend time thinking about and understanding most concepts. Even then, if a question that requires a bit more 'imagination' comes up I don't always recognize the concepts behind it, as is the case with calculus proofs, for instance. Nevertheless, I refuse to believe that I'm simply not made for it. I do very well in programming and software engineering courses where a lot of students struggle. At first I could not grasp what they found to be so difficult, but eventually I realized that having previous programming experience is a great asset -- once I had seen and made practical use of the programming concepts, learning about them in depth in an academic setting became much easier, as I had already seen their use "in the wild". I suppose I'm hoping that something similar happens with mathematics -- perhaps once the practical idea behind a concept (which textbook authors sure do a great job of concealing...) is evident, the seemingly dry and symbolic ideas and proofs would be easier to understand? I'm really not sure. All I'm sure of is I'd like to get better at calculus, but I don't yet understand why some of us pick it up easily while others have to spend considerable amounts of time on it and still lack complete understanding when an unusual problem is given.

    Read the article

  • minifying final html output using regex with codeigniter

    - by Aman
Google's guidelines suggest minifying HTML, i.e. removing all the unnecessary spaces. CodeIgniter does have a feature for gzipping output, or it can be done via .htaccess, but I would also like to remove unnecessary spaces from the final HTML output as well. I played a bit with this piece of code to do it, and it seems to work. It does indeed produce HTML without excess spaces and other tab formatting. class Welcome extends CI_Controller { function _output($output) { echo preg_replace('!\s+!', ' ', $output); } function index(){ ... } } Now the problem with this is that there may be tags like <pre>, <textarea>, etc. which may contain whitespace that this regex would also strip. So, how do I remove excess space from the final HTML without affecting spaces or formatting inside those tags, using regex? Thanks to @Alan Moore I got the answer; this worked for me: echo preg_replace('#(?ix)(?>[^\S ]\s*|\s{2,})(?=(?:(?:[^<]++|<(?!/?(?:textarea|pre)\b))*+)(?:<(?>textarea|pre)\b|\z))#', ' ', $output); @ridgerunner here did a very good job of analyzing this regex; I ended up using his solution. Cheers to ridgerunner.

    Read the article

  • Formatting data for printing automatically

    - by 0bytes
I have a requirement to retrieve data, format it to mimic an old request format we've used for years, and then send it by IP address to any of a number of printers. Gathering the data and selecting the printer is no problem; I need to format the output for the printer and I'm just not sure what's best. The requirement is that end users have no interaction with the print request that's generated; only the intended recipients of the request job will know of the printed (and, in a future release, e-mailed) request. We also have a requirement for a future update: the solution must support a config option in the DB so that each recipient site can choose printing or e-mail notification, so I expect I'll need to keep the control in a console app or web service. I have the data and I can send it to the printers or generate an HTML-formatted e-mail easily; I just need to format it for automatic printing. I don't think SSRS will help because I don't know of a way to designate a printer when running a report. Thanks for all help & suggestions.
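
    If the printers accept plain text on the raw/JetDirect port (an assumption about the hardware; port 9100 is merely the common default), one way to push an already-formatted request straight to a printer by IP is a bare socket write. A minimal Java sketch of that idea:

        import java.io.OutputStream;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        public class RawPrinter {
            // Sends already-formatted text to a network printer's raw port.
            public static void print(String printerIp, String formattedRequest) throws Exception {
                try (Socket socket = new Socket(printerIp, 9100);
                     OutputStream out = socket.getOutputStream()) {
                    out.write(formattedRequest.getBytes(StandardCharsets.US_ASCII));
                    out.write('\f'); // form feed so the last page is ejected
                    out.flush();
                }
            }
        }

    The same raw-socket approach works just as well from a .NET console app or service, which fits the no-user-interaction requirement above.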

    Read the article

  • Give the mount point of a path

    - by Charles Stewart
The following, very non-robust shell code will give the mount point of $path: (for i in $(df|cut -c 63-99); do case $path in $i*) echo $i;; esac; done) | tail -n 1 Is there a better way to do this? Postscript This script is really awful, but has the redeeming quality that it Works On My Systems. Note that several mount points may be prefixes of $path. Examples On a Linux system: cas@txtproof:~$ path=/sys/block/hda1 cas@txtproof:~$ for i in $(df -a|cut -c 57-99); do case $path in $i*) echo $i;; esac; done| tail -1 /sys On a Mac OS X system cas local$ path=/dev/fd/0 cas local$ for i in $(df -a|cut -c 63-99); do case $path in $i*) echo $i;; esac; done| tail -1 /dev Note the need to vary cut's parameters, because of the way df's output differs: indeed, awk is better. Answer It looks like munging tabular output is the only way within the shell, but df /dev/fd/impossible | tail -1 | awk '{ print $NF}' is a big improvement on what I had. Note two differences in semantics: firstly, df $path insists that $path names an existing file, while the script I had above doesn't care; secondly, there are no worries about dereferencing symlinks. It's not difficult to write Python code to do the job.

    Read the article

  • parsing urls from windows batch file

    - by modest
I have a text file (myurls.txt) whose contents are a list of URLs as follows: Slides_1: http://linux.koolsolutions.com/svn/ProjectA/tags/REL-1.0 Exercise_1: http://linux.koolsolutions.com/svn/ProjectA/tags/REL-1.0 Slides_2: http://linux.koolsolutions.com/svn/oldproject/ProjectB/tags/REL-2.0 Exercise_2: http://linux.koolsolutions.com/svn/ProjectB/tags/REL-1.0 Exercise_3: http://linux.koolsolutions.com/svn/BlueBook/ProjectA/tags/REL-1.0 Now I want to parse this text file in a for loop such that after each iteration (e.g. taking the first URL from the file above) I have the following information in different variables: %i% = REL-1.0 %j% = http://linux.koolsolutions.com/svn/ProjectA %k% = http://linux.koolsolutions.com/svn/ProjectA/tags/REL-1.0 After some experimenting I have the following code, but it only works (kind of) if the URLs have the same number of slashes: @echo off set FILE=myurls.txt FOR /F "tokens=2-9 delims=/ " %%i in (%FILE%) do ( @REM <do something with variables i, j and k.> ) I am fine with other solutions, e.g. using a Windows Script Host/VBScript script, as long as they can run on a default Windows XP/7 installation. In other words, I know I can use awk, grep, sed, python, etc. for Windows and get the job done, but I don't want the users to have to install anything besides a standard Windows installation.

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that is adding the snapshot: exec sp_addpublication_snapshot @publication = N'MyPub' , @frequency_type = 4 , @frequency_interval = 1 , @frequency_relative_interval = 1 , @frequency_recurrence_factor = 0 , @frequency_subday = 1 , @frequency_subday_interval = 5 , @active_start_date = 0 , @active_end_date = 0 , @active_start_time_of_day = 500 , @active_end_time_of_day = 235959 Here is the step that runs in the agent job: Step Name: Run agent. Type: Replication Snapshot Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02] -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1 This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

    Read the article

  • Checking when two headers are included at the same time.

    - by fortran
Hi, I need to do an assertion based on two related preprocessor #defines declared in different header files... The codebase is huge and it would be nice if I could find a place to put the assertion where the two headers are already included, to avoid polluting namespaces unnecessarily. Checking just that a file includes both explicitly might not suffice, as one (or both) of them might be pulled in at a higher level of a nested include hierarchy. I know it wouldn't be too hard to write a script to check that, but if there's already a tool that does the job, so much the better. Example: file foo.h #define FOO 0xf file bar.h #define BAR 0x1e I need to put somewhere (it doesn't matter a lot where) something like this: #if (2*FOO) != BAR #error "foo is not twice bar" #endif Yes, I know the example is silly, as one macro could simply be derived from the other, but let's say that the includes can be generated from different places not under my control and I just need to check that they match at compile time... And I don't want to just add one include after the other, as it might conflict with previous code that I haven't written, which is why I would like to find a file where both are already present. In brief: how can I find a file that includes (directly or indirectly) two other files? Thanks!

    Read the article

  • Instrumenting a string

    - by George Polevoy
Somewhere back in my C++ era I crafted a library which enabled a string representation of the computation history. Having a math expression like: TScalar Compute(TScalar a, TScalar b, TScalar c) { return ( a + b ) * c; } I could render its string representation: r = Compute(VerbalScalar("a", 1), VerbalScalar("b", 2), VerbalScalar("c", 3)); Assert.AreEqual(9, r.Value); Assert.AreEqual("(a+b)*c==(1+2)*3", r.History ); C++ operator overloading allowed the substitution of a simple type with a complex self-tracking entity holding an internal tree representation of everything happening to the objects. Now I would like to have the same possibility for .NET strings, only instead of variable names I would like to see stack traces of all the places in code which affected a string. And I want it to work with existing code and existing compiled assemblies. Also I want all this to hook into the Visual Studio debugger, so I could set a breakpoint and see everything that happened to a string. Which technology would allow this kind of thing? I know it sounds like a utopia, but I think the Visual Studio code coverage tools actually do the same kind of job when instrumenting the assemblies.

    Read the article

  • What's the best choice career-wise, to know a little about a lot or a lot about a little?

    - by nimo
I work as a developer at a rather small company and we provide a web application that is used by a big base of customers. Because we are so small, everyone has to be able to do a lot of different tasks. It ranges from advanced support and developing the product (programming: C/C++, C#, PHP, SQL, JavaScript, HTML, CSS) to handling network configuration and network-related issues, and even sometimes going to sales meetings with potential customers. My concern is that I don't really specialize in any specific area. I know and learn a little about a lot. I graduated from school two years ago and this is my first real employment, and when I look at other positions out there they always require so-and-so many years of experience in a specific area (for example 5 years of C#). For me to get that kind of specialized experience will be really hard at my current job. My question for you is: what is, in your opinion, the best choice career-wise, to know a little about a lot or a lot about a little? What path did you take? Pros and cons that come with that choice.

    Read the article

  • Building an 'Activation Key' Generator in JAVA

    - by jax
I want to develop a key generator for my phone applications. Currently I am using an external service to do the job, but I am a little concerned that the service might go offline one day, at which point I will be in a bit of a pickle. How authentication works now: the public key is stored on the phone. When the user requests a key, the 'phone ID' is sent to the "Key Generation Service" and the encrypted key is returned and stored inside a license file. On the phone I can check whether the key is for the current phone by using a method getPhoneId(), which I can check against the current phone to grant or not grant access to features. I like this and it works well; however, I want to create my own "Key Generation Service" on my own website. Requirements: public and private key encryption (Bouncy Castle); written in Java; must support getApplicationId() (so that many applications can use the same key generator) and getPhoneId() (to get the phone ID out of the encrypted license file). I want to be able to send the ApplicationId and PhoneId to the service for license key generation. Can someone give me some pointers on how to accomplish this? I have dabbled with some Java encryption but am definitely no expert and can't find anything that will help me. A list of the Java classes I would need to instantiate would be helpful.
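
    As a rough illustration only (the exact scheme, key size and license format here are assumptions, and Bouncy Castle offers equivalents of the same classes), the plain JDK java.security classes involved in producing a verifiable key from an application ID and phone ID look something like this:

        import java.nio.charset.StandardCharsets;
        import java.security.KeyPair;
        import java.security.KeyPairGenerator;
        import java.security.Signature;
        import java.util.Base64;

        public class LicenseSigner {
            public static void main(String[] args) throws Exception {
                // One-off: generate the key pair; keep the private key on the server,
                // ship the public key inside the phone application.
                KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
                gen.initialize(2048);
                KeyPair pair = gen.generateKeyPair();

                // "Key Generation Service" side: sign applicationId + phoneId.
                String payload = "com.example.app|PHONE-1234"; // hypothetical IDs
                Signature signer = Signature.getInstance("SHA256withRSA");
                signer.initSign(pair.getPrivate());
                signer.update(payload.getBytes(StandardCharsets.UTF_8));
                String licenseKey = Base64.getEncoder().encodeToString(signer.sign());

                // Phone side: verify the license with the embedded public key.
                Signature verifier = Signature.getInstance("SHA256withRSA");
                verifier.initVerify(pair.getPublic());
                verifier.update(payload.getBytes(StandardCharsets.UTF_8));
                System.out.println("valid: " + verifier.verify(Base64.getDecoder().decode(licenseKey)));
            }
        }

    A signature rather than raw encryption is shown here because it is the usual way to make a license that only the server can issue but any copy of the public key can check; whether that matches the existing service's format is an assumption.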

    Read the article

  • How to rename many files url escaped (%XX) to human readable form

    - by F. Hauri
I have downloaded a lot of files into one directory, but many of them are stored with URL-escaped filenames, containing percent signs followed by two hexadecimal characters, like: ls -ltr $HOME/Downloads/ -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom%20Mobile%20Unlimited%20Kurzanleitung-%282011-05-12%29.pdf -rw-r--r-- 2 user user 1525794 24 nov 10:08 31010ENY-HUAWEI%20E173u-1%20HSPA%20USB%20Stick%20Quick%20Start-%28V100R001_01%2CEnglish%2CIndia-Reliance%2CC%2Ccolor%29.pdf ... All these names match the following form with exactly 3 parts: Name of the object -( Revision, and/or Date, useless ... ). Extension My goal is to have one command that renames all these files to obtain: -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom_Mobile_Unlimited_Kurzanleitung.pdf -rw-r--r-- 2 user user 1525794 24 nov 10:08 31010ENY-HUAWEI_E173u-1_HSPA_USB_Stick_Quick_Start.pdf I've successfully done the job in pure bash with: urlunescape() { local srce="$1" done=false part1 newname ext while ! $done ;do part1="${srce%%%*}" newname="$part1\\x${srce:${#part1}+1:2}${srce:${#part1}+3}" [ "$part1" == "$srce" ] && done=true || srce="$newname" done newname="$(echo -e $srce)" ext=${newname##*.} newname="${newname%-(*}" echo ${newname// /_}.$ext } for file in *;do mv -i "$file" "$(urlunescape "$file")" done ls -ltr -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom_Mobile_Unlimited_Kurzanleitung.pdf -rw-r--r-- 2 user user 1525794 24 nov 10:08 31010ENY-HUAWEI_E173u-1_HSPA_USB_Stick_Quick_Start.pdf or using sed, tr, bash ... and sed: for file in *;do echo -e $( echo $file | sed 's/%\(..\)/\\x\1/g' ) | sed 's/-(.*\.\([^\.]*\)$/.\1/' | tr \ \\n _\\0 | xargs -0 mv -i "$file" done ls -ltr -rw-r--r-- 2 user user 13171425 24 nov 10:07 Swisscom_Mobile_Unlimited_Kurzanleitung.pdf -rw-r--r-- 2 user user 1525794 24 nov 10:08 31010ENY-HUAWEI_E173u-1_HSPA_USB_Stick_Quick_Start.pdf But I'm sure there must exist a simpler and/or shorter way to do this.

    Read the article

  • Publish Git repository to SVN

    - by Ken Williams
I and my small team work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now: git clone ssh://me@gitserver/git-repo/Projects/ProjX px2 cd px2 svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX git svn fetch git rebase trunk master git svn dcommit Here's what happens when I attempt it: % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2 Cloning into 'ProjX'... ... % cd px2 % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX Committed revision 123. % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo % git svn fetch W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX' W: Do not be alarmed at the above message git-svn is just searching aggressively for old history. This may take a while on large repositories % git rebase trunk master fatal: Needed a single revision invalid upstream trunk I could have sworn this worked previously; anyone have any suggestions? Thanks.

    Read the article

  • Replacing words in string

    - by abkai
Okay, so I have the following little function: def swap(inp): inp = inp.split() out = "" for item in inp: ind = inp.index(item) item = item.replace("i am", "you are") item = item.replace("you are", "I am") item = item.replace("i'm", "you're") item = item.replace("you're", "I'm") item = item.replace("my", "your") item = item.replace("your", "my") item = item.replace("you", "I") item = item.replace("my", "your") item = item.replace("i", "you") inp[ind] = item for item in inp: ind = inp.index(item) item = item + " " inp[ind] = item return out.join(inp) Which, while not particularly efficient, gets the job done for shorter sentences. Basically, all it does is swap pronoun etc. perspectives. This is fine when I throw a string like "I love you" at it; it returns "you love me". But when I throw something like: you love your version of my couch because I love you, and you're a couch-lover. I get: I love your versyouon of your couch because I love I, and I'm a couch-lover. I'm confused as to why this is happening. I explicitly split the string into a list to avoid this. Why would it be able to detect it as being part of a list item, rather than just an exact match? Also, slightly deviating to avoid having to post another question so similar: if a solution to this breaks the function, what will happen to commas, full stops and other punctuation? It made some very surprising mistakes. My expected output is: I love my version of your couch because you love I, and I'm a couch-lover. The reason I formatted it like this is that I eventually hope to be able to replace the item.replace(x, y) variables with words in a database.

    Read the article

  • Designing model/database for distance between any two locations (that may change)

    - by Yo Ludke
We need to create a web app which has a number of events, each with a location (created as user-generated content, so the number of events will grow large). The distance between any two events should be available, for example to determine the top 5 closest events and similar things. Users may change the locations of events. How should one design the database/model for this (in a scalable way)? I was thinking of doing it with a "distance table" (like this: http://www.deutschland-tourist.info/images/entfernungstabelle.gif). Then every time a location changes, one row and one column have to be recalculated (this could be done with a delayed job, because it is not important to have the changes instantly). Possible scaling problems: the database gets too large (n² items for n events) and there is too much calculation to be done. For example, we should see whether this is okay for 10,000 users: if each has created just one event, this would already be 100 million integers... Do you think this would be a good way to do it efficiently? How could one realize such a distance table with a Rails model? Is it possible with an SQL database? Would you take other approaches?
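
    For scale comparison, the alternative to a precomputed n² table is computing distances on demand from stored coordinates. Assuming each event carries a latitude/longitude pair (an assumption; the post does not say how locations are represented), a haversine sketch in Java:

        public class GeoDistance {
            private static final double EARTH_RADIUS_KM = 6371.0;

            // Great-circle distance between two lat/lon points, in kilometres.
            public static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
                double dLat = Math.toRadians(lat2 - lat1);
                double dLon = Math.toRadians(lon2 - lon1);
                double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                         + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                         * Math.sin(dLon / 2) * Math.sin(dLon / 2);
                return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
            }

            public static void main(String[] args) {
                // Berlin to Munich, roughly 500 km.
                System.out.println(haversineKm(52.52, 13.405, 48.137, 11.575));
            }
        }

    With coordinates stored per event, a top-5-closest query touches n rows (or fewer with a spatial index) instead of maintaining n² precomputed distances.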

    Read the article

  • Hopping from a C++ to a Perl Unix profile?

    - by rocknroll
    Hi all, I have been a C++,Linux Developer till now and I am adept in this stack. Off late I have been getting opportunities that require Perl,Unix (with knowledge of C++,shell scripting) expertise. Organisations are showing interest even thought I don't have much scripting experience to boast off. The roll is more in a Support,maintenance project involving SQL as well. Off late I am in a fix whether to forgo these offers or not. I don't know the dynamics of an IT organisation and thus on one hand I fear that my C++ experience will be nullified and on the positive side I am getting to work on a new technology stack which will only add to my skill set. I am sure, most of you at some point of time have encountered such dilemmas and would have taken some decision. I want you to share your perspectives on such a scenario where a person is required to change his/her technology stack when changing his/her job. What are the merits and demerits in going with either of the choices? Also I know that C++ isn't going anywhere in the near future. What about perl? I have no clue as to what the future holds for perl developer? Whether there are enough opportunities for a perl developer? I am asking this question here because most of my fellow programmers face this career choice dilemma. Thanks.

    Read the article

  • I can't figure out my PHP problem. Can anyone help with this PHP code?

    - by Jeffery
    when I click the submit button it gives me an error page. Here is the site http://nealconstruction.com/estimate.html $emailSubject = 'Estimate' $webMaster = '[email protected]' /* Gathering Info */ $emailField = $_POST ['email']; $nameField = $_POST ['name']; $phoneField = $_POST ['phone']; $typeField = $_POST ['type']; $locationField = $_POST ['location']; $infoField = $_POST ['info']; $contactField = $_POST ['contact']; $body = <<<EOD Email: $email Name: $name Phone Number: $phone Type Of Job: $type Location: $location Additional Info: $info How to Contact: $contact EOD; $headers = "From: $email\r\n"; $headers .= "Content-Type: text/html\r\n"; $success = mail($webMaster; $emailSubject; $body; $headers); /* Results rendered as html */ $theResults = << JakesWorks - travel made easy-Homepage Thank you for your information! We will contact you very soon! EOD; echo "$theResults"; ?

    Read the article

  • Android opengl releasing textures

    - by user1642418
I have a bit of a problem. I am developing a game plus engine for Android and I got stuck. I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the 1st level/scene is loaded. Then I go back to the main menu, and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs. Some background: each time a scene is loaded all the resources are released, and upon the first frame render the needed resources get loaded. The performance is more or less OK. Note that I am calling the glDeleteTextures method, but I think it's not doing its job and releasing memory. The thing is, when I minimize the app and open it again the problem doesn't occur, even though almost the same things are executed; this way Android releases the memory. How do I release/get rid of unused textures properly? This happens on an HTC Desire HD (Ice Cream Sandwich 4.0.4). Other games work fine, so I bet this is not a problem in the ROM.
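
    For reference, a minimal sketch of explicit texture deletion with GLES20; the textureIds bookkeeping is a hypothetical stand-in for however the engine tracks the names returned by glGenTextures:

        import android.opengl.GLES20;

        public class TextureManager {
            // Texture names previously returned by glGenTextures.
            private int[] textureIds = new int[0];

            // Must be called on the GL thread that owns the context,
            // e.g. via GLSurfaceView.queueEvent(...) or inside onDrawFrame.
            public void releaseAll() {
                if (textureIds.length > 0) {
                    GLES20.glDeleteTextures(textureIds.length, textureIds, 0);
                    textureIds = new int[0]; // drop stale names so they are never reused
                }
            }
        }

    If glDeleteTextures is called from a thread without a current GL context (or after the context is lost), it silently does nothing, which would match the symptom of memory never being freed; that is only a guess here, but a common cause worth ruling out.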

    Read the article

  • Unit Testing the Use of TransactionScope

    - by Randolpho
The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction. The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows: using(var scope = new TransactionScope()) { // transactional methods datalayer.InsertFoo(); datalayer.InsertBar(); scope.Complete(); } While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... unpossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must. The Question: How can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern? Final Thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope, which would be a mockable facade over TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?

    Read the article

  • How to solve this problem with Python

    - by morpheous
    I am "porting" an application I wrote in C++ into Python. This is the current workflow: Application is started from the console Application parses CLI args Application reads an ini configuration file which specifies which plugins to load etc Application starts a timer Application iterates through each loaded plugin and orders them to start work. This spawns a new worker thread for the plugin The plugins carry out their work and when completed, they die When time interval (read from config file) is up, steps 5-7 is repeated iteratively Since I am new to Python (2 days and counting), the distinction between script, modules and packages are still a bit hazy to me, and I would like to seek advice from Pythonista as to how to implement the workflow described above, using Python as the programing language. In order to keep things simple, I have decided to leave out the time interval stuff out, and instead run the python script/scripts as a cron job instead. This is how I am thinking of approaching it: Encapsulate the whole application in a package which is executable (i.e. can be run from the command line with arguments. Write the plugins as modules (I think maybe its better to implement each module in a separate file?) I havent seen any examples of using threading in Python yet. Could someone provide a snippet of how I could spawn a thread to run a module. Also, I am not sure how to implement the concept of plugins in Python - any advice would be helpful - especially with a code snippet.

    Read the article
