Search Results

Search found 14679 results on 588 pages for 'index merge'.


  • Ignoring focusLost(), SWT.Verify, or other SWT listeners in Java code.

    - by Zoot
    Outside of the actual SWT listener, is there any way to ignore a listener via code? For example, I have a Java program that implements SWT Text widgets, and the widgets have:

    - SWT.Verify listeners to filter out unwanted text input.
    - ModifyListeners that wait for the correct number of valid input characters and automatically set focus (using setFocus()) to the next valid field, skipping the other text widgets in the tab order.
    - focusLost(FocusEvent) FocusListeners that wait for the loss of focus from the text widget to perform additional input verification and execute an SQL query based on the user input.

    The issue I run into is clearing the text widgets. One of the widgets has the format "####-##" (four numbers, a hyphen, then two numbers), and I have implemented this listener, which is a modified version of SWT Snippet179. The initial text for this text widget is "    -  " to provide visual feedback to the user as to the expected format. Only numbers are acceptable input, and the program automatically skips past the hyphen at the appropriate point.

        /*
         * This listener was adapted from the "verify input in a template (YYYY/MM/DD)"
         * SWT Code Snippet (also known as Snippet179), from the Snippets page of the
         * SWT Project: http://www.eclipse.org/swt/snippets/
         */
        textBox.addListener(SWT.Verify, new Listener() {
            boolean ignore;
            public void handleEvent(Event e) {
                if (ignore) return;
                e.doit = false;
                StringBuffer buffer = new StringBuffer(e.text);
                char[] chars = new char[buffer.length()];
                buffer.getChars(0, chars.length, chars, 0);
                if (e.character == '\b') {
                    for (int i = e.start; i < e.end; i++) {
                        switch (i) {
                            case 0: /* [x]xxx-xx */
                            case 1: /* x[x]xx-xx */
                            case 2: /* xx[x]x-xx */
                            case 3: /* xxx[x]-xx */
                            case 5: /* xxxx-[x]x */
                            case 6: /* xxxx-x[x] */ {
                                buffer.append(' ');
                                break;
                            }
                            case 4: /* xxxx[-]xx */ {
                                buffer.append('-');
                                break;
                            }
                            default:
                                return;
                        }
                    }
                    textBox.setSelection(e.start, e.start + buffer.length());
                    ignore = true;
                    textBox.insert(buffer.toString());
                    ignore = false;
                    textBox.setSelection(e.start, e.start);
                    return;
                }
                int start = e.start;
                if (start > 6) return;
                int index = 0;
                for (int i = 0; i < chars.length; i++) {
                    if (start + index == 4) {
                        if (chars[i] == '-') {
                            index++;
                            continue;
                        }
                        buffer.insert(index++, '-');
                    }
                    if (chars[i] < '0' || '9' < chars[i]) return;
                    index++;
                }
                String newText = buffer.toString();
                int length = newText.length();
                textBox.setSelection(e.start, e.start + length);
                ignore = true;
                textBox.insert(newText);
                ignore = false;
                /* After a valid key press, check whether the input is complete
                   and pass the cursor to the next text box. */
                if (7 == textBox.getCaretPosition()) {
                    /* "0000-00" is the special "Erase Me" code for these text
                       boxes: a known valid input that has no results. */
                    if ("0000-00".equals(textBox.getText())) {
                        ignore = true;
                        textBox.setText("    -  ");
                        ignore = false;
                    }
                    // Change focus to a different text box with setFocus().
                    differentTextBox.setFocus();
                }
            }
        });

    As you can see, the only method I've figured out to clear this text widget from a different point in the code is by assigning "0000-00" (textBox.setText("000000")) and checking for that input in the listener. When that input is received, the listener changes the text back to "    -  " (four spaces, a hyphen, then two spaces).
    There is also a focusLost listener that parses this text widget for spaces; to avoid unnecessary SQL queries, it clears/resets all fields if the input is invalid (i.e. contains spaces).

        // Add a focus listener to textBox to wait for loss of focus before
        // performing the SQL statement.
        textBox.addFocusListener(new FocusAdapter() {
            @Override
            public void focusLost(FocusEvent evt) {
                // Get the contents of otherTextBox and textBox.
                // (otherTextBox must be <= textBox)
                String boxFour = otherTextBox.getText();
                String boxFive = textBox.getText();
                // If either text box has spaces in it, don't perform the search.
                if (boxFour.contains(" ") || boxFive.contains(" ")) {
                    // Don't perform SQL statements. Debug statement.
                    System.out.println("Tray Position input contains spaces. Ignoring.");
                    // Make all previous results invisible, if any.
                    labels.setVisible(false);
                    differentTextBox.setText("");
                    labelResults.setVisible(false);
                } else {
                    // ... perform SQL statement ...
                }
            }
        });

    Often, I use SWT MessageBox widgets in this code to communicate with the user, or wish to change the text widgets back to an empty state after verifying the input. The problem is that message boxes seem to create a focusLost event, and the setText(String) method is subject to the SWT.Verify listeners that are present on the text widget. Any suggestions as to selectively ignoring these listeners in code, but keeping them present for all other user input? Thank you in advance for your assistance.
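    One commonly suggested pattern (a sketch, not from the original question; the wrapper class and its names are hypothetical) is to promote the listener-local "ignore" flag into a shared guard that any part of the code can toggle while performing programmatic updates:

        import org.eclipse.swt.widgets.Text;

        /** Wraps a Text widget so programmatic updates can bypass its listeners. */
        final class GuardedText {
            private final Text text;
            private boolean ignore; // every attached listener checks this first

            GuardedText(Text text) { this.text = text; }

            boolean isIgnoring() { return ignore; }

            Text widget() { return text; }

            /** Runs an action (e.g. setText) with the custom listeners suppressed. */
            void runIgnoringListeners(Runnable action) {
                ignore = true;
                try {
                    action.run();
                } finally {
                    ignore = false;
                }
            }
        }

    Each Verify/Modify/Focus listener then begins with if (guarded.isIgnoring()) return;, and resetting the field from anywhere in the code becomes guarded.runIgnoringListeners(() -> guarded.widget().setText("    -  "));. An alternative is to remove the listeners with removeListener() before the programmatic update and re-add them afterwards, but the shared flag avoids bookkeeping over which listeners were attached.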

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical GitHub scenario), it seems like for a "traditional" team there could be some problems over the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations.

    In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that 2 people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task).

    Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated. Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo.

    So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?
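    (For comparison: with a designated "blessed" central repository, a DVCS can enforce the same merge-before-share discipline, because a push is refused unless the local branch already contains the upstream history. A minimal sketch with git; the remote and branch names are only illustrative:

        git pull --rebase origin master   # integrate upstream first; conflicts are resolved by the developer who made the change
        git push origin master            # refused as non-fast-forward until that merge/rebase has happened

    In that setup the conflict lands on the desk of the person pushing, just as in the centralized model.)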

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example:

    - If the team is not used to using source control, training problems can arise.
    - If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc.
    - Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways across ~100 files. Merging would take not only a great knowledge of the project, but is also well beyond the ability to complete in the given time. Other projects are falling out of sync too.
    - Developers continue to distrust source control and therefore compound the issue by not using it.
    - Developers argue that using source control is wasteful because merging is error-prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is indeed error-prone. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying code on the server saves time. This is also difficult to argue, because the merge required to synchronize the code to start with is time-consuming, across ~10 projects.
    - Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.
    - So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems.
    - Servers and hard drive space are extremely difficult to come by, yet user expectations are rising.

    What can be done in this situation?

    Read the article

  • Using jQuery to get nested elements from XML

    - by Dkong
    I'm spinning my wheels on this. How do I get the values of the following nested elements from the XML below (my code is also below)? I am after the "descShort" value, and then the "last" and "change" values inside <capital>:

        <indices>
          <index>
            <code>DJI</code>
            <exchange>NYSE</exchange>
            <liveness>DELAYED</liveness>
            <indexDesc>
              <desc>Dow Jones Industrials</desc>
              <descAbbrev>DOW JONES</descAbbrev>
              <descShort>DOW JONES</descShort>
              <firstActive></firstActive>
              <lastActive></lastActive>
            </indexDesc>
            <indexQuote>
              <capital>
                <first>11144.57</first>
                <high>11153.79</high>
                <low>10973.92</low>
                <last>11018.66</last>
                <change>-125.9</change>
                <pctChange>-1.1%</pctChange>
              </capital>
              <gross>
                <first>11144.57</first>
                <high>11153.79</high>
                <low>10973.92</low>
                <last>11018.66</last>
                <change>-125.9</change>
                <pctChange>-1.1%</pctChange>
              </gross>
              <totalEvents>4</totalEvents>
              <lastChanged>16-Apr-2010 16:03:00</lastChanged>
            </indexQuote>
          </index>
          <index>
            <code>XAO</code>
            <exchange>ASX</exchange>
            <liveness>DELAYED</liveness>
            <indexDesc>
              <desc>ASX All Ordinaries</desc>
              <descAbbrev>All Ordinaries</descAbbrev>
              <descShort>ALL ORDS</descShort>
              <firstActive>06-Mar-1970</firstActive>
              <lastActive></lastActive>
            </indexDesc>
            <indexQuote>
              <capital>
                <first>5007.30</first>
                <high>5007.30</high>
                <low>4934.00</low>
                <last>4939.40</last>
                <change>-67.9</change>
                <pctChange>-1.4%</pctChange>
              </capital>
              <gross>
                <first>5007.30</first>
                <high>5007.30</high>
                <low>4934.00</low>
                <last>4939.40</last>
                <change>-67.9</change>
                <pctChange>-1.4%</pctChange>
              </gross>
              <totalEvents>997</totalEvents>
              <lastChanged>19-Apr-2010 17:02:54</lastChanged>
            </indexQuote>
          </index>
        </indices>

    Here is my code so far:

        $.ajax({
            type: "GET",
            url: "stockindices.xml",
            dataType: "xml",
            success: function(xml) {
                $(xml).find('index').each(function(){
                    var self = $(this);
                    var code = self.find('indexDesc');
                    $(code).find('indexDesc').each(function(){
                        alert(self.find('descShort').text());
                    });
                    $('<span class=\"tickerItem\"></span>').html(values[0].text()).appendTo('#marq');
                });
            }
        });
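    For reference, here is one way (a sketch, not from the original question) to pull those values out with jQuery's XML traversal; it assumes the document shown above, where the element names are lowercase "last" and "change":

        $.ajax({
            type: "GET",
            url: "stockindices.xml",
            dataType: "xml",
            success: function (xml) {
                $(xml).find('index').each(function () {
                    var $index = $(this);
                    // descShort lives under indexDesc
                    var name = $index.find('indexDesc > descShort').text();
                    // last and change live under indexQuote > capital
                    var $capital = $index.find('indexQuote > capital');
                    var last = $capital.children('last').text();
                    var change = $capital.children('change').text();
                    // e.g. "DOW JONES 11018.66 (-125.9)"
                    $('<span class="tickerItem"></span>')
                        .text(name + ' ' + last + ' (' + change + ')')
                        .appendTo('#marq');
                });
            }
        });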

    Read the article

  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC?
    a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview
    II. Software requirements for this tutorial
    a. Visual Studio 2010/2012. You can get your free copy here: Microsoft Visual Studio 2012
    b. MVC Framework.
       Option 1 - Install using a standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=30683
       Option 2 - Install using the Web Platform Installer: http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc
    III. Creating your first MVC4 application
    a. In Visual Studio, click the File > New > Solution link.
    b. Click Other Project Type > Visual Studio Solutions, select Blank Solution in the templates window, and name the solution MVCPrimer.
    c. Now click File > New and select Project.
    d. Select Visual C# > Web, select ASP.NET MVC 4 Web Application, and enter MyWebSite as the name.
    e. Select Empty, with Razor as the view engine, and uncheck "Create a unit test project".
    f. You can now view a basic MVC 4 application structure in your Solution Explorer.
    g. Add our first controller by right-clicking the Controllers folder in Solution Explorer and selecting Add > Controller.
    h. Change the name of the controller to HomeController and, under the scaffolding options, select Empty MVC Controller.
    i. You will now see a basic controller with an Index method that returns an ActionResult.
    j. Add a new view folder for our Home controller: right-click the Views folder in Solution Explorer, select Add > New Folder, and name the folder Home.
    k. Add a new view by right-clicking the Views > Home folder and selecting Add > View.
    l. Name the view Index, select Razor (CSHTML) as the view engine, leave all checkboxes unchecked for now, and click Add.
    m. Note the relationship between our HomeController and the Home views subfolder.
    n. Add new HTML content to our newly created Index view.
    o. Press F5 to run our MVC application.
    p. We will create our new model: right-click the Models folder in Solution Explorer and select Add > Class.
    q. Let us name our class Customer.
    r. Edit the Customer class with the following code. (The original article showed the class as an image; a plausible reconstruction, inferred from how the class is used in the controller below, follows.)
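        // Reconstructed Customer model; treat this as an assumption rather than
        // the author's exact code. The property set is inferred from usage below.
        namespace MyWebSite.Models
        {
            public class Customer
            {
                public int Id { get; set; }
                public string CompanyName { get; set; }
                public string ContactNo { get; set; }
                public string ContactPerson { get; set; }
                public string Description { get; set; }
            }
        }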
    s. Open the HomeController by double-clicking HomeController in the Controllers folder and edit it as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace MyWebSite.Controllers
        {
            public class HomeController : Controller
            {
                //
                // GET: /Home/
                public ActionResult Index()
                {
                    return View();
                }

                public ActionResult ListCustomers()
                {
                    List<Models.Customer> customers = new List<Models.Customer>();

                    // Add the first customer to our collection.
                    customers.Add(new Models.Customer()
                    {
                        Id = 1,
                        CompanyName = "Volvo",
                        ContactNo = "123-0123-0001",
                        ContactPerson = "Gustav Larson",
                        Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"
                    });

                    // Add the second customer to our collection.
                    customers.Add(new Models.Customer()
                    {
                        Id = 2,
                        CompanyName = "BMW",
                        ContactNo = "999-9876-9898",
                        ContactPerson = "Franz Josef Popp",
                        Description = "Bayerische Motoren Werke AG, (BMW; English: Bavarian Motor Works) is a " +
                                      "German automobile, motorcycle and engine manufacturing company founded in 1917. "
                    });

                    // Add the third customer to our collection.
                    customers.Add(new Models.Customer()
                    {
                        Id = 3,
                        CompanyName = "Audi",
                        ContactNo = "983-2222-1212",
                        ContactPerson = "Karl Benz",
                        Description = " is a multinational division of the German manufacturer Daimler AG,"
                    });

                    return View(customers);
                }
            }
        }

    t. Now create a view for this action. Before continuing, press Ctrl + Shift + B to rebuild the solution; this makes the previously created model available in the model class drop-down of the Add View dialog. Right-click the Views > Home folder and select Add > View.
    u. Name the view ListCustomers, select Razor (CSHTML) as the view engine, put a check mark on "Create a strongly-typed view", and select Customer (MyWebSite.Models) as the model class. Select List as the scaffold template and click OK.
    v. Run the MVC application by pressing F5 and append Home/ListCustomers to the address bar. We should now see a web page similar to the one below.
    x. You can edit ListCustomers.cshtml to remove and add HTML:

        @model IEnumerable<MyWebSite.Models.Customer>

        @{
            Layout = null;
        }

        <!DOCTYPE html>
        <html>
        <head>
            <meta name="viewport" content="width=device-width" />
            <title>ListCustomers</title>
        </head>
        <body>
            <h2>List of Customers</h2>
            <table border="1">
                <tr>
                    <th>@Html.DisplayNameFor(model => model.CompanyName)</th>
                    <th>@Html.DisplayNameFor(model => model.Description)</th>
                    <th>@Html.DisplayNameFor(model => model.ContactPerson)</th>
                    <th>@Html.DisplayNameFor(model => model.ContactNo)</th>
                </tr>
                @foreach (var item in Model) {
                    <tr>
                        <td>@Html.DisplayFor(modelItem => item.CompanyName)</td>
                        <td>@Html.DisplayFor(modelItem => item.Description)</td>
                        <td>@Html.DisplayFor(modelItem => item.ContactPerson)</td>
                        <td>@Html.DisplayFor(modelItem => item.ContactNo)</td>
                    </tr>
                }
            </table>
        </body>
        </html>

    y. Press F5 to run the MVC application.
    z. You will notice the @Html.DisplayFor calls. These are called HTML helpers; you can read more about HTML helpers at http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp
    That's all. You now have your first MVC4 Razor engine web application.

    Read the article

  • Which free graphical diff tool can I use with Mercurial and Eclipse?

    - by user1776347
    I am new to version control systems, and at my new job they are using Mercurial. All the other commands are easy to type from Linux, but I run into problems when I have to do a diff/merge of documents with diffput and diffget. I am using Eclipse and have installed the Mercurial plugin for Eclipse. So far it commits the changes, but I am not getting anything where I can see the two files side by side for merging. Any ideas?

    Read the article

  • November EC Meeting Minutes and Materials

    - by Heather VanCura
    The JCP EC meeting minutes and materials from the EC-only portion of the 20 November meeting are now available on the EC Meeting Summaries page. Agenda:
    Part 1: Private EC meeting at 2:00 pm PST [PMO presentation]
    - Roll call
    - Agenda review
    - EC meeting attendance report
    - Personnel changes
    - EC stats
    - Election results
    - 2013 meeting planning
    - JSR 358 Expert Group session
    Part 2: Public EC meeting at 3:00 pm PST [PMO presentation]
    - Election results and the EC merge
    - JSR 358 status report
    - JCP 2.8 status update and community audit program - Heather VanCura
    - Discussion/Q&A

    Read the article

  • Is it legal to modify MIT-licensed code and sell it?

    - by Alper
    I have a project, and I want to use a ready-made script that is licensed under MIT, but shipping this script separately would be redundant. So I've decided to merge my code and the MIT-licensed script into the same file (let's say I'll modify, improve, and add new features to it). I'm planning to sell this work on a marketplace, but is that legal? NOTE: I'll keep (refer to) the MIT-licensed script's copyright notice in the final file.

    Read the article

  • Optimizing Python code performance when importing a zipped CSV into a Mongo collection

    - by mark
    I need to import a zipped CSV into a Mongo collection, but there is a catch: every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:
    - Every record read from the CSV is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the CSV header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the CSV, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that time zone is used to convert the record timestamp (Pacific Time) to the respective local timestamp.
    - If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course; calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT: The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2: OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection, so instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3: Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime percall cumtime  percall filename:lineno(function)
        1       0.012   0.012   1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
        1       0.001   0.001   1230.959 1230.959 ImportDataIntoMongo.py:187(main)
        1       853.558 853.558 853.558  853.558 {raw_input}
        1       0.598   0.598   370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407  9.965   0.000   359.034  0.001   ImportDataIntoMongo.py:137(read_csv_zip)
        343408  2.927   0.000   287.035  0.001   c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408  1.842   0.000   274.803  0.001   c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408  2.542   0.000   271.212  0.001   c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408  4.512   0.000   253.673  0.001   c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408  0.971   0.000   242.078  0.001   c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1       0.028    0.028    2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
        1       0.017    0.017    2888.679 2888.679 ImportDataIntoMongo.py:202(main)
        1       2365.526 2365.526 2365.526 2365.526 {raw_input}
        1       0.766    0.766    502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407  9.147    0.000    491.433  0.001   ImportDataIntoMongo.py:152(read_csv_zip)
        343406  0.571    0.000    391.394  0.001   c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957  0.001    390.824  0.001   c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513  22.616   0.000    38.705   0.000   c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406  6.134    0.000    33.326   0.000   ImportDataIntoMongo.py:162(<dictcomp>)
        346     0.396    0.001    30.665   0.089   c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
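    One direction worth noting (a sketch, not from the post; it assumes that nearby coordinates usually resolve to the same zone, and it reuses the SON query and tz_lookup_radius from the code above) is to memoize the timezone lookup so that repeated or nearby coordinates never hit MongoDB twice:

        # Cache of rounded (lng, lat) -> timezone document (or None).
        _tz_cache = {}

        def lookup_tz(timezones, lng, lat, precision=2):
            key = (round(lng, precision), round(lat, precision))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return _tz_cache[key]

    How much this helps depends entirely on how clustered the input coordinates are; for uniformly scattered points it saves little, and the chosen rounding precision trades accuracy near zone borders for hit rate.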

    Read the article

  • Merging the Executive Committees

    - by Patrick Curran
    As I explained in this blog last year, we use the Process to change the Process. The first of three planned JSRs to modify the way the JCP operates (JSR 348: Towards a new version of the Java Community Process) completed in October 2011. That JSR focused on changes to make our process more transparent and to enable broader participation. The second JSR was inspired by our conviction that Java is One Platform and by our expectation that Java ME and Java SE will become more aligned over time. In anticipation of this change, JSR 355: JCP Executive Committee Merge will merge the two Executive Committees into one.

    The JSR is going very well. We have reached consensus within the Executive Committees, which serve as the Expert Group for process-change JSRs. How we intend to make the transition to a single EC is explained in the revised versions of the Process and EC Standing Rules documents that are currently posted for Early Draft Review. Our intention is to reduce the total number of EC seats but to keep the same ratio (2:1) of ratified and elected seats.

    Briefly, the plan will be implemented in two stages. The October 2012 elections will be held as usual, but candidates will be informed that they will serve only a one-year term if elected. The two ECs will be merged immediately after this election; at the same time, Oracle's second permanent seat and one of IBM's two ratified seats will be eliminated. The initial merged EC will therefore have 30 members. In the October 2013 elections we will eliminate three more ratified seats and two elected seats, thereby reducing the size of the combined EC to 25 members (16 ratified seats, 8 elected seats, plus Oracle's permanent seat). All remaining seats, including those of members who were elected in 2012, will be up for re-election in 2013; that election should be particularly interesting. Starting in 2013 we will change from a three-year to a two-year election cycle (half of all EC members will be up for re-election each year).

    We believe that these changes will streamline our operations and position us for a future in which the distinctions between desktop and mobile devices become increasingly blurred. Please take this opportunity to review and comment on our proposed changes - we appreciate your input. Thank you, and onward to JCP.next.3!

    Read the article

  • How can I generate XML application configuration using Zend_Tool?

    - by wimvds
    When creating a new project using zf create project myproject, it will create a default project layout with an application.ini in the configs folder. Where can I change these default settings so that it generates (and uses) an XML file (application.xml) instead? I've looked at the documentation for Zend_Tool (http://framework.zend.com/manual/en/zend.tool.html), but there seems to be no information on how to do this. And suppose you'd like to use a different default folder layout (i.e. use htdocs instead of public as your document root), is there a way to specify this as well? Any pointers to relevant information? (BTW, I've looked at the Quickstart; nothing relevant is mentioned there unless I'm overlooking it.)

    edit: I already tried creating a profile (stored in .zf/project/profiles) and used it to create a project (using zf create project myproject myprofile), but that doesn't change anything, even though the .zfproject.xml file in the root of the new project does contain the <applicationConfigFile type="xml"/> setting... The new project contains this (as you can see, it's just the default settings; only the type of applicationConfigFile has been changed):

        <?xml version="1.0"?>
        <projectProfile type="default" version="1.10">
          <projectDirectory>
            <projectProfileFile filesystemName=".zfproject.xml"/>
            <applicationDirectory classNamePrefix="Application_">
              <apisDirectory enabled="false"/>
              <configsDirectory>
                <applicationConfigFile type="xml"/>
              </configsDirectory>
              <controllersDirectory>
                <controllerFile controllerName="Index">
                  <actionMethod actionName="index"/>
                </controllerFile>
                <controllerFile controllerName="Error"/>
              </controllersDirectory>
              <formsDirectory enabled="false"/>
              <layoutsDirectory enabled="false"/>
              <modelsDirectory/>
              <modulesDirectory enabled="false"/>
              <viewsDirectory>
                <viewScriptsDirectory>
                  <viewControllerScriptsDirectory forControllerName="Index">
                    <viewScriptFile forActionName="index"/>
                  </viewControllerScriptsDirectory>
                  <viewControllerScriptsDirectory forControllerName="Error">
                    <viewScriptFile forActionName="error"/>
                  </viewControllerScriptsDirectory>
                </viewScriptsDirectory>
                <viewHelpersDirectory/>
                <viewFiltersDirectory enabled="false"/>
              </viewsDirectory>
              <bootstrapFile filesystemName="Bootstrap.php"/>
            </applicationDirectory>
            <dataDirectory enabled="false">
              <cacheDirectory enabled="false"/>
              <searchIndexesDirectory enabled="false"/>
              <localesDirectory enabled="false"/>
              <logsDirectory enabled="false"/>
              <sessionsDirectory enabled="false"/>
              <uploadsDirectory enabled="false"/>
            </dataDirectory>
            <docsDirectory>
              <file filesystemName="README.txt"/>
            </docsDirectory>
            <libraryDirectory>
              <zfStandardLibraryDirectory enabled="false"/>
            </libraryDirectory>
            <publicDirectory>
              <publicStylesheetsDirectory enabled="false"/>
              <publicScriptsDirectory enabled="false"/>
              <publicImagesDirectory enabled="false"/>
              <publicIndexFile filesystemName="index.php"/>
              <htaccessFile filesystemName=".htaccess"/>
            </publicDirectory>
            <projectProvidersDirectory enabled="false"/>
            <temporaryDirectory enabled="false"/>
            <testsDirectory>
              <testPHPUnitConfigFile filesystemName="phpunit.xml"/>
              <testApplicationDirectory>
                <testApplicationBootstrapFile filesystemName="bootstrap.php"/>
              </testApplicationDirectory>
              <testLibraryDirectory>
                <testLibraryBootstrapFile filesystemName="bootstrap.php"/>
              </testLibraryDirectory>
            </testsDirectory>
          </projectDirectory>
        </projectProfile>

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it's the second time in a few days that I had to explain it to different customers, I think it's worth making a post out of it. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE (http://dev.mysql.com/doc/refman/5.0/en/replace.html), as well as the INSERT ... ON DUPLICATE KEY UPDATE option (http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html). With SQL Server you can do the very same in a more standard way, using the MERGE statement with the support of row constructors. Let's say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do this:

        MERGE INTO
            dbo.MyTargetTable T
        USING
            (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001))
             Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S
        ON
            T.alternate_key = S.alternate_key
        WHEN NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can supercharge the idea using table-valued parameters, which you can send directly from your .NET application. Easy, powerful and effective. (A sketch of that variant follows.)
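    As an illustration of that last idea (a sketch, not from the original post; the type and procedure names are hypothetical), wrap the MERGE in a stored procedure that accepts a table-valued parameter, so one round trip can upsert a whole batch:

        -- A user-defined table type describing one batch row.
        CREATE TYPE dbo.TargetRow AS TABLE
        (
            alternate_key VARCHAR(50) PRIMARY KEY,
            col_1 INT, col_2 INT, col_3 INT, col_4 INT, col_5 INT
        );
        GO

        CREATE PROCEDURE dbo.UpsertMyTargetTable
            @rows dbo.TargetRow READONLY   -- filled from a .NET DataTable/SqlDbType.Structured
        AS
        BEGIN
            MERGE INTO dbo.MyTargetTable AS T
            USING @rows AS S
                ON T.alternate_key = S.alternate_key
            WHEN NOT MATCHED THEN
                INSERT (alternate_key, col_1, col_2, col_3, col_4, col_5)
                VALUES (S.alternate_key, S.col_1, S.col_2, S.col_3, S.col_4, S.col_5)
            WHEN MATCHED THEN
                UPDATE SET T.col_1 = S.col_1, T.col_2 = S.col_2, T.col_3 = S.col_3,
                           T.col_4 = S.col_4, T.col_5 = S.col_5;
        END
        GO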

    Read the article

  • Using SQL Server's Output Clause

    When you are inserting, updating, or deleting records from a table, SQL Server keeps track of the records that are changed in two pseudo tables: INSERTED and DELETED. These tables are normally used in DML triggers. If you use the OUTPUT clause on an INSERT, UPDATE, DELETE or MERGE statement, you can expose the records that go to these pseudo tables to your application and/or T-SQL code. A short example follows.
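    For instance, a minimal illustration (table and column names are hypothetical): an UPDATE that returns both the old and new values of every changed row to the caller:

        UPDATE dbo.Orders
        SET    status = 'Shipped'
        OUTPUT inserted.order_id,
               deleted.status  AS old_status,   -- value before the update
               inserted.status AS new_status    -- value after the update
        WHERE  status = 'Packed';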

    Read the article

  • JSR Updates

    - by heathervc
    JSR 359, SIP Servlet 2.0, is a new JSR that has been submitted for JSR Review. The review closes 16 July; the JSR Approval Ballot will be 17-20 July 2012.
    JSR 355, JCP Executive Committee Merge, has passed the Public Review Ballot, and a Proposed Final Draft is now available for review.
    JSR 340, Java Servlet 3.1 Specification, has posted an Early Draft Review. The review closes 1 August 2012.

    Read the article

  • How to trace a function array argument in DTrace

    - by uejio
    I still use DTrace just about every day in my job, and I found that I had to print an argument to a function which was an array of strings. The array was variable length, up to about 10 items. I'm not sure if this is the right way to do it, but it seems to work and is not too painful if the array size is small. Here's an example. Suppose in your application you have the following function, where n is the number of items in the array s:

        void arraytest(int n, char **s)
        {
            /* Loop thru s[0] to s[n-1] */
        }

    How do you use DTrace to print out the values of s[i], or of s[0] to s[n-1]? DTrace does not have if-then blocks or for loops, so you can't do something like:

        for i=0; i<arg0; i++
            trace arg1[i];

    It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine whether that probe should fire or not. So the first probe for tracing the arraytest function is:

        pid$target::arraytest:entry
        {
            self->n = arg0;
        }

    Then, if I want to print out the first few items of the array, I first check the value of n. If it's greater than the index that I want to print out, then I can print that index. For example, if I want to print out the 3rd element of the array, I would do something like:

        pid$target::arraytest:entry
        /self->n > 2/
        {
            printf("%s", stringof(arg1 + 2 * sizeof(pointer)));
        }

    Actually, that doesn't quite work, because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where DTrace is). Also, the sizeof(char *) is 8, but for some reason I have to use 4, which is the sizeof(uint32_t). (I still don't know how that works.) So the script that prints the 3rd element of the array should look like:

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            /* print the 3rd element (index = 2) of the second arg. */
            i = 2;
            size = 4;
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

    If your array is large, then it's quite painful, since you have to write one probe for every array index. For example, here's the full script for printing the first 5 elements of the array:

        #!/usr/sbin/dtrace -s

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 0/
        {
            i = 0;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 1/
        {
            i = 1;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            i = 2;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 3/
        {
            i = 3;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 4/
        {
            i = 4;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1 + size * i, size);
            printf("%s: a[%d]=%s", probefunc, i, copyinstr(*(uint32_t *)self->a_t));
        }

    If the array is large, then your script will also have to be very long to print out all values of the array.

    Read the article

  • Multi threading to execute multiple makefiles

    - by user105205
    The problem is that it takes a lot of time (20-30 minutes) to build our application. There are around 5000+ files, and the code is divided into various subsystems that can be built independently. Each subsystem has its own makefile. My question is: is it possible to create a thread for each makefile so that all the subsystems are built in parallel, and subsequently run a makefile to merge them all? If it is possible, how do I proceed? (A sketch of one approach follows.)
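    For reference (a sketch, not from the original question; the subsystem names and merge.mk are hypothetical), GNU make can already do this without explicit threads: declare each subsystem as a prerequisite of the merge step and run make with -j. Recipe lines must begin with a tab:

        SUBSYSTEMS := core ui net db

        .PHONY: all merge $(SUBSYSTEMS)

        all: merge

        # Each subsystem builds with its own makefile; running `make -j4 all`
        # lets these rules execute in parallel.
        $(SUBSYSTEMS):
        	$(MAKE) -C $@

        # The merge step starts only after every subsystem has finished.
        merge: $(SUBSYSTEMS)
        	$(MAKE) -f merge.mk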

    Read the article

  • Your mail merging options with Thunderbird

    Worldlabel: "If you use the open source Mozilla Thunderbird email client, you're probably familiar with its powerful address book features: import and export, online status information for your friends, even synchronization. But one thing that's not so obvious is how to do a mail merge to your address book contacts."

    Read the article

  • What tales of horror have you regarding "whitespace" errors?

    - by reechard
    I'm looking for tales of woe such as companies, websites, and products failing; religious flamewars; data loss. Examples:
    - text editor settings conflicts (indent at 4 with tabs at 8 vs. indent at 2 with tabs at 4)
    - Windows line endings vs. Unix line endings
    - text vs. binary files in source code control
    Related terms: "line feed", "carriage return", "horizontal tab", "mono spacing", "unix line endings", "version control", "diff", "merge", "ftp"

    Read the article

  • MailMergeLib - A .NET Mail Client Library

    MailMergeLib is an SMTP mail client library. It makes use of .NET System.Net.Mail and provides comfortable mail merge capabilities. MailMergeLib corrects a number of the most annoying bugs and RFC violations that .NET 2.0 to .NET 4.0 suffer from.

    Read the article

  • Simulating water droplets on a window

    - by skyuzo
    How do I simulate water droplets realistically falling, gathering, and flowing down a window? For example, see http://www.youtube.com/watch?v=4jaGyv0KRPw. In particular, I want to simulate how smaller droplets merge together to form larger droplets that have enough weight to oppose the surface tension and flow downward, leaving a trail of water. I'm aware of fluid simulation, but how would it be applied in this situation?
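    (A toy sketch of one possible particle approach, not from the original question; all constants are arbitrary. Droplets merge on contact, and a droplet heavier than a surface-tension threshold slides down, shedding small trail droplets:

        import random

        class Droplet:
            def __init__(self, x, y, mass):
                self.x, self.y, self.mass = x, y, mass

        def step(droplets, dt, threshold=4.0, merge_dist=2.0):
            # Merge droplets that touch (naive O(n^2) pass).
            merged = []
            for d in droplets:
                for m in merged:
                    if abs(d.x - m.x) < merge_dist and abs(d.y - m.y) < merge_dist:
                        # Combined droplet sits at the mass-weighted position.
                        total = m.mass + d.mass
                        m.x = (m.x * m.mass + d.x * d.mass) / total
                        m.y = (m.y * m.mass + d.y * d.mass) / total
                        m.mass = total
                        break
                else:
                    merged.append(d)
            # Heavy droplets overcome surface tension and slide down,
            # leaving a small trail droplet behind them.
            trails = []
            for d in merged:
                if d.mass > threshold:
                    d.y += (d.mass - threshold) * dt * (0.8 + 0.4 * random.random())
                    d.mass *= 0.995  # the trail slowly drains the droplet
                    trails.append(Droplet(d.x, d.y - 1.0, 0.05))
            return merged + trails

    A full answer would add rendering, meandering paths, and absorption of the trail by following droplets, but the merge/threshold/trail loop above is the core of the behavior in the linked video.)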

    Read the article

  • 13.04 Ringtail USB installation overwrote my username

    - by barnhillec
    I had to use an .iso USB stick to upgrade from 12.10 to 13.04 Ringtail. Great results, except the USB stick install app asked me to provide an identity, so I provided the same one I used for 12.10. Unfortunately it overwrote my other identity with a fresh one, so now I have no ownership of any of my previous profiles, etc. A dumb mistake, but is there a way to take ownership of my previous user identity and merge it with the current one?

    Read the article

  • ASP.NET ReportViewer "report execution has expired or cannot be found" error when using session state service or SQL Server session state

    - by dotneteer
    We encountered an error like this:

        ReportServerException: The report execution x5pl2245iwvvq055khsxzlj5 has expired or cannot be found. (rsExecutionNotFound)]
           Microsoft.Reporting.WebForms.ServerReportSoapProxy.OnSoapException(SoapException e) +72
           Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.ProxyMethodInvocation.Execute(RSExecutionConnection connection, ProxyMethod`1 initialMethod, ProxyMethod`1 retryMethod) +428
           Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.GetExecutionInfo() +133
           Microsoft.Reporting.WebForms.ServerReport.EnsureExecutionSession() +197
           Microsoft.Reporting.WebForms.ServerReport.LoadViewState(Object viewStateObj) +256
           Microsoft.Reporting.WebForms.ServerReport..ctor(SerializationInfo info, StreamingContext context) +355

        [TargetInvocationException: Exception has been thrown by the target of an invocation.]
           System.RuntimeMethodHandle._SerializationInvoke(Object target, SignatureStruct& declaringTypeSig, SerializationInfo info, StreamingContext context) +0
           System.Reflection.RuntimeConstructorInfo.SerializationInvoke(Object target, SerializationInfo info, StreamingContext context) +108
           System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(Object obj, SerializationInfo info, StreamingContext context) +273
           System.Runtime.Serialization.ObjectManager.FixupSpecialObject(ObjectHolder holder) +49
           System.Runtime.Serialization.ObjectManager.DoFixups() +223
           System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +188
           System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +203
           System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) +788
           System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() +55
           System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) +281
           System.Web.SessionState.SessionStateItemCollection.DeserializeItem(Int32 index) +110
           System.Web.SessionState.SessionStateItemCollection.get_Item(Int32 index) +17
           System.Web.SessionState.HttpSessionStateContainer.get_Item(Int32 index) +13
           System.Web.Util.AspCompatApplicationStep.OnPageStartSessionObjects() +71
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2065

    This error occurs long after the report viewer page has closed. It occurs on any ASP.NET page in the application, rendering the entire application unusable until the user gets a new session. The cause of the problem is that the ReportViewer uses session state. When a page retrieves the session from an out-of-process session store (the session state service or SQL Server), the session variable of type Microsoft.Reporting.WebForms.ReportHierarchy is deserialized from the session storage. The deserialization can cause the object to connect to the report server when the report is no longer available. The solution is simple but not pretty: we need to clean up the session variable when the report viewer page is closed. One way is to add some JavaScript to the page to handle the window.onunload event and, in the handler, call a web service to clean up the session variable.
    The name of the session variable appears to be randomly generated, so we need to loop through the session variables to find one of the type Microsoft.Reporting.WebForms.ReportHierarchy (a sketch follows). Microsoft has implemented pinging between the report viewer and the report server to keep the report alive on the server while the report viewer is up; I hope they will go one step further and take care of this problem.
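    A sketch of such a cleanup web method (hypothetical, not from the original post; it matches the randomly named variable by its type, as described above):

        // ASMX web method called from the page's window.onunload handler.
        [System.Web.Services.WebMethod(EnableSession = true)]
        public void ReleaseReportSessions()
        {
            var session = System.Web.HttpContext.Current.Session;
            var staleKeys = new System.Collections.Generic.List<string>();
            foreach (string key in session.Keys)
            {
                // The variable name is random, so match on the type instead.
                if (session[key] is Microsoft.Reporting.WebForms.ReportHierarchy)
                    staleKeys.Add(key);
            }
            foreach (string key in staleKeys)
                session.Remove(key);
        }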

    Read the article

  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any way one can know how much data from a table is stored in the memory cache. The more detailed question I usually get is: if there are multiple indexes on a table (and they are used in a query), is the data of the single table stored in the memory cache multiple times, or only a single time? Here is a query you can run to figure out what kind of data is stored in the cache:

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
               name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id,
                   s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id AND au.type = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Now let us run the query above and observe its output. There are four columns: cached_pages_count lists the pages cached in memory; BaseTableName lists the original base table from which data pages are cached; IndexName lists the name of the index from which pages are cached; IndexTypeDesc lists the type of index.

    Now, let us do one more experiment. Please note that you should not run this test on a production server, as it can severely reduce the performance of the database:

        DBCC DROPCLEANBUFFERS

    This will drop all the clean buffers, so we will be able to start again from there. Now run the following script and check the execution plan of the query:

        USE AdventureWorks
        GO
        SELECT UnitPrice, ModifiedDate
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderDetailID BETWEEN 1 AND 100
        GO

    The execution plan contains the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It is clear from the resultset that when more than one index is used, the data pages related to both (or all) of the indexes are stored in the memory cache separately.

    Let me know what you think of this article. I had great pleasure writing it because I was able to write on the subject I like most. In the next article, we will see exactly which data are cached and which are not, using a few undocumented commands.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider: the relevant SEO practices, and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching and promoting your website, especially if ranking matters to you.

    Understanding SEO
    For starters, you begin with keyword research and then optimize your content according to your findings by tagging, meta tags, etc. Drupal modules, once installed, help you manage a lot of such parameters:
    - Identifying the target keywords
    - Using the Page Title and Token modules
    - Pathauto configuration
    - <h1> heading tags
    - Optimizing Drupal's default robots.txt file
    - Etc.
    While Drupal gives you a lot of ability to make your website content worthy and search-engine friendly, it is important to make sure you are not crossing the line, or you could get penalized.

    Modules Overview
    Drupal power is at its best when you have these modules and great brains working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically and wisely. Understanding your keyword competition and enhancing your content is the basic key to success, and of course the modules:
    - Pathauto: Automatically creates search-engine-friendly, readable URLs from tokens. A token is a piece of data from content, say the author's username or the content's title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.
    - NodeWords: An amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.
    - Page Title: Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.
    - Global Redirect: Manage content duplication, 301 redirects, and URL validation with this small but powerful module.
    - Taxonomy Manager: Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.
    - RobotsTxt: A robots.txt file is vital for ensuring that search engine spiders don't index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.
    - XML Sitemap: An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want included in the index. It can even automatically submit your sitemap to Google, Yahoo!, Ask.com, and Windows Live every time you update a node, or at a specific interval.
    - Node Import: This module allows you to import a set of nodes from a Comma Separated Values (CSV) or Tab Separated Values (TSV) text file. It makes it easy to import hundreds or thousands of CSV rows, tie the rows to CCK fields (or locations), and file them under the right taxonomy hierarchy. This is a super life-saver module.

    Read the article
