Search Results

Search found 27890 results on 1116 pages for 'oracle retail documentation team'.

Page 1013/1116

  • Multiple Jira instances on a single Tomcat 6 server?

    - by davidhayes
    I have a feeling this is a stupid question, but I can't find the answer anywhere. I need to deploy two Jira instances on a single Tomcat server, and I can't figure out how to pass in the jira.home property. The documentation says I need to add a web context property called 'jira.home' — this property is set in different files depending on your application server. For example, for Tomcat (and therefore for JIRA Standalone), you will need to configure the server.xml file. For other application servers you may need to configure the web.xml file, or set 'Context parameter' options in the deployment UI of the application server. Note that if you have specified a JIRA home in jira-application.properties (i.e. the recommended method), it will override your web context property. I was hoping something like this would work:

        <Context jira.home="d:/jira/data" path=""
                 docBase="D:\Jira\atlassian-jira-enterprise-4.1\dist-tomcat\tomcat-6\atlassian-jira-4.1.war"
                 debug="0">
            <Resource name="jdbc/JiraDS" auth="Container" type="javax.sql.DataSource"
                      username="sa" password="*****"
                      driverClassName="net.sourceforge.jtds.jdbc.Driver"
                      url="jdbc:jtds:sqlserver://*****:1433/jira41_519;user=****;password=****" />
            <Resource name="UserTransaction" auth="Container" type="javax.transaction.UserTransaction"
                      factory="org.objectweb.jotm.UserTransactionFactory" jotm.timeout="60"/>
            <Manager pathname=""/>
        </Context>

    Any ideas?
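
    A sketch of one likely fix, assuming Tomcat 6 semantics: jira.home is not a <Context> attribute, but Tomcat does accept a nested <Parameter> element, which is the "web context property" the documentation refers to. Each instance gets its own context and its own home directory (paths and context names below are illustrative):

        <Context path="/jira1" docBase="D:\Jira\instance1\atlassian-jira-4.1.war">
            <Parameter name="jira.home" value="d:/jira/data1" override="false"/>
        </Context>
        <Context path="/jira2" docBase="D:\Jira\instance2\atlassian-jira-4.1.war">
            <Parameter name="jira.home" value="d:/jira/data2" override="false"/>
        </Context>

    Note the caveat already quoted above: a jira.home set in jira-application.properties overrides the context parameter, so that entry must be left unset in both instances for this to take effect.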

    Read the article

  • ASP.NET Emails blocked by Spam filter on Exchange

    - by Amadiere
    I'm trying to send an email via some C# ASP.NET code. This is being sent to our internal mail relay server, with our standard "from" address (e.g. [email protected]). In some instances it gets through OK; in others it's blocked by the spam filter. An example of our Web.config:

        <mailSettings>
            <smtp from="[email protected]">
                <network host="mailrelay.domain.com" defaultCredentials="true" />
            </smtp>
        </mailSettings>

    I've spoken with our Exchange Server team and they inform me that on occasions our mail looks sufficiently like spam and is automatically blocked. The algorithm appears to be points based and blocks on a score of 45. 20 points are instantly added because our system is not sending the hostname with the domain name suffixed: the server is expecting myServerName.domain.com but, despite being part of that domain, the server is sending only myServerName. I've been asked to look at altering the EHLO string that is sent and/or influencing the host so that it presents its fully qualified name. However, this makes little sense to me, and although I understand the concept of what I need to change, I don't know where to begin looking for the fix.
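
    Not a definitive fix, but a quick diagnostic sketch: System.Net.Mail builds its EHLO string from the local host name, so the first step is seeing what name (and what DNS suffix, if any) the web server actually resolves for itself. Only standard System.Net APIs are used here:

        using System;
        using System.Net;
        using System.Net.NetworkInformation;

        class EhloCheck
        {
            static void Main()
            {
                // The short host name .NET will typically present in HELO/EHLO
                Console.WriteLine("Host name: " + Dns.GetHostName());

                // The machine's configured DNS suffix; empty means no FQDN is set
                IPGlobalProperties ip = IPGlobalProperties.GetIPGlobalProperties();
                Console.WriteLine("DNS suffix: " + ip.DomainName);
                Console.WriteLine("FQDN: " + ip.HostName + "." + ip.DomainName);
            }
        }

    If the suffix comes back empty, the usual remedy is setting the machine's primary DNS suffix at the OS level rather than changing application code; the EHLO name then becomes fully qualified on its own. (If you later move to .NET 4, the <network> config element also accepts a clientDomain attribute that sets the EHLO name directly.)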

    Read the article

  • Understanding how rpmbuild works

    - by ereOn
    Hi, for my work I have to create documentation on "How to create an RPM package on Red Hat 5". I'm used to Debian and its derivatives (Ubuntu, and so on) and thus to Debian packages (aka .deb files). It seems that the RPM logic is quite different from what I already know, and I am having some issues understanding it. From what I read, it seems that one needs to be root to create an RPM package. While I understand why root could be required to install a package, I still don't understand why elevated privileges should be needed just to create one. If I try to create an RPM package as a user, even after changing the buildroot, it fails on the %install step because I don't have permission to write files into /usr/bin. Fair enough, but... why does it want to copy my files into /usr/bin at this step?! I just want to create the package, not install it! I'm sure I'm missing something here. Is there anyone who could give me at least a basic understanding of how rpmbuild works, and why? Thank you very much!
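
    For what it's worth, root genuinely isn't required: during %install, rpmbuild should be staging files into a scratch buildroot that later gets archived into the .rpm, and it only touches /usr/bin when the spec file's %install section ignores that buildroot. A minimal non-root setup looks something like this (paths are illustrative):

        # ~/.rpmmacros : keep the whole build tree under your home directory
        %_topdir    /home/builder/rpmbuild

        # In the spec file, stage into the buildroot instead of the live system
        %install
        rm -rf $RPM_BUILD_ROOT
        make install DESTDIR=$RPM_BUILD_ROOT

    The key point for the documentation: $RPM_BUILD_ROOT is a throwaway image of the final filesystem layout, so "installing" during the build never means installing onto the machine.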

    Read the article

  • Improving Performance of Crystal Reports using Stored Procedures

    - by mjh41
    Recently I updated a Crystal Report that was doing all of its work on the client-side (selects, formulas, etc.) and changed all of the logic to be done on the server-side through stored procedures using an Oracle 11g database. Now the report is only being used to display the output of the stored procedures and nothing else. Everything I have read on this subject says that utilizing stored procedures should greatly reduce the running time of the report, but it still takes roughly the same amount of time to retrieve the data from the server. Is there something wrong with the stored procedure I have written, or is the issue in the Crystal Report itself? Here is the stored procedure code, along with the package that defines the necessary REF CURSOR:

        CREATE OR REPLACE PROCEDURE SP90_INVENTORYDATA_ALL
        (
            invdata_cur       IN OUT sftnecm.inv_data_all_pkg.inv_data_all_type,
            dCurrentEndDate   IN     vw_METADATA.CASEENTRCVDDATE%type,
            dCurrentStartDate IN     vw_METADATA.CASEENTRCVDDATE%type
        )
        AS
        BEGIN
            OPEN invdata_cur FOR
                SELECT vw_METADATA.CREATIONTIME,
                       vw_METADATA.RESRESOLUTIONDATE,
                       vw_METADATA.CASEENTRCVDDATE,
                       vw_METADATA.CASESTATUS,
                       vw_METADATA.CASENUMBER,
                       (CASE WHEN vw_METADATA.CASEENTRCVDDATE < dCurrentStartDate
                              AND ((vw_METADATA.CASESTATUS is null OR vw_METADATA.CASESTATUS != 'Closed')
                                   OR TO_DATE(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY') >= dCurrentStartDate)
                             THEN 1 ELSE 0 END) InventoryBegin,
                       (CASE WHEN (to_date(vw_METADATA.RESRESOLUTIONDATE, 'MM/DD/YYYY')
                                       BETWEEN dCurrentStartDate AND dCurrentEndDate)
                              AND vw_METADATA.RESRESOLUTIONDATE is not null
                              AND vw_METADATA.CASESTATUS is not null
                             THEN 1 ELSE 0 END) CaseClosed,
                       (CASE WHEN vw_METADATA.CASEENTRCVDDATE BETWEEN dCurrentStartDate AND dCurrentEndDate
                             THEN 1 ELSE 0 END) CaseCreated
                  FROM vw_METADATA
                 WHERE vw_METADATA.CASEENTRCVDDATE <= dCurrentEndDate
                 ORDER BY vw_METADATA.CREATIONTIME, vw_METADATA.CASESTATUS;
        END SP90_INVENTORYDATA_ALL;

    And the package:

        CREATE OR REPLACE PACKAGE inv_data_all_pkg AS
            TYPE inv_data_all_type IS REF CURSOR RETURN inv_data_all_temp%ROWTYPE;
        END inv_data_all_pkg;
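
    One observation, offered as a guess: moving the SELECT into a procedure doesn't change its execution plan, so if the query itself was the bottleneck, the report will be exactly as slow as before. Checking the plan is cheap with standard Oracle tooling (the simplified query below stands in for the cursor's full SELECT):

        EXPLAIN PLAN FOR
            SELECT CREATIONTIME, CASESTATUS
              FROM vw_METADATA
             WHERE CASEENTRCVDDATE <= SYSDATE
             ORDER BY CREATIONTIME, CASESTATUS;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    If the plan shows a full scan of the view plus a large sort, the likely wins are an index covering CASEENTRCVDDATE and storing RESRESOLUTIONDATE as a DATE so the per-row TO_DATE() conversions disappear, neither of which Crystal can do for you.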

    Read the article

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project? Some ideas that I have to make it successful (beyond marketing it):

        - Good documentation and tutorials
        - Automated unit tests and builds to push updates to the website
        - A clear roadmap
        - Bug tracking integrated with the source control
        - A style guide to keep the code consistent
        - A forum for the community to get support, share ideas, etc.
        - A good example application built with the framework
        - A blog to keep the community informed
        - Maintaining backwards compatibility wherever possible

    Some of my questions: How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process (see the sketch below)? How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated? What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer it to be as friendly and helpful as possible, without a lot of drama. I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.
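
    On the automation question, a rough sketch of the usual shape, assuming SVN plus a hook- or cron-triggered build script (tool choices, host names, and paths are illustrative, not prescriptive):

        #!/bin/sh
        # ci-build.sh : run on every commit via an SVN post-commit hook or CI job
        set -e

        svn update /srv/build/framework                   # pull the latest revision
        cd /srv/build/framework
        phpunit --log-junit build/tests.xml tests/        # abort the build on test failure
        phpdoc -d src/ -t build/apidocs                   # regenerate API docs
        rsync -a build/apidocs/ deploy@www.example.org:/var/www/apidocs/   # publish

    For the submission and approval questions, the standard low-tech answer in the SVN world is to accept patches attached to the bug tracker rather than granting commit access, so every change passes through a maintainer by construction.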

    Read the article

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MBs). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

        - .NET 2.0 support, preferably with a FOSS implementation
        - Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
        - Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
        - Space and time efficient (XML has been excluded as an option given this requirement)

    Options considered so far:

        - Protocol Buffers: turned down by verdict of the documentation about Large Data Sets; since that comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
        - HDF5, EXI: do not seem to have .NET implementations.
        - SQLite / SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use.
        - BSON: does not appear to support requirement 3.
        - Fast Infoset: only seems to have paid .NET implementations.

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
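
    If nothing off the shelf fits, one pattern that covers the versioning and random-access requirements is a RIFF-style tagged-chunk layout: every section is written as (4-byte id, length, payload), so an old reader skips chunks it doesn't recognize and new sections can simply be appended. A minimal .NET 2.0-compatible sketch (chunk ids and type names are illustrative):

        using System.IO;
        using System.Text;

        static class Chunks
        {
            // Write one tagged chunk: 4-byte ASCII id, 4-byte length, payload.
            public static void Write(BinaryWriter w, string id, byte[] payload)
            {
                w.Write(Encoding.ASCII.GetBytes(id), 0, 4);   // e.g. "IDX1"
                w.Write(payload.Length);
                w.Write(payload);
            }

            // Scan for a chunk by id, skipping unknown ones. This skip is what
            // makes old readers tolerant of sections added in later versions.
            public static byte[] Find(BinaryReader r, string id)
            {
                while (r.BaseStream.Position < r.BaseStream.Length)
                {
                    string chunkId = Encoding.ASCII.GetString(r.ReadBytes(4));
                    int length = r.ReadInt32();
                    if (chunkId == id)
                        return r.ReadBytes(length);
                    r.BaseStream.Seek(length, SeekOrigin.Current);
                }
                return null;
            }
        }

    Extending intermediate results then means appending a new chunk at the end of the file, and dropping a field means retiring its chunk id.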

    Read the article

  • Web service request ignores basic WSDL XML element restrictions

    - by Oliver
    Hi all, I have encountered some difficulties while trying to validate an incoming SOAP message to a service I have running on JBoss AS (v. 5.1.0). In my code, I have explicitly set some fields to be required, e.g.:

        public class MyClass {
            @XmlElement(required = true, nillable = false)
            private List<myOtherObjects> myList;
        }

    This requirement is also reflected in the WSDL (note the lack of minOccurs="0"):

        <xs:element maxOccurs="unbounded" name="myList" type="tns:myOtherObjects" />

    However, when I send a test SOAP message that has myList set to empty or null, these restrictions are completely ignored, forcing me to validate manually within the application logic on the service. I did some searching on the Internet and found out that, on WebLogic, the validation does not seem to be enabled by default, though it can be turned on by modifying the weblogic-webservices.xml file (http://forums.oracle.com/forums/thread.jspa?threadID=783972&tstart=115). I'm wondering if there is something similar that I have to do with JBoss AS to enable automatic validation before the SOAP message reaches the service. Any help would be greatly appreciated. Oliver
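
    I can't vouch for a JBoss-specific switch, but a portable fallback on any JAX-WS stack is a SOAPHandler that runs the inbound payload through javax.xml.validation before it reaches the endpoint. A sketch (the schema path is illustrative; the handler is attached to the service via @HandlerChain):

        import java.io.File;
        import java.util.Set;
        import javax.xml.XMLConstants;
        import javax.xml.namespace.QName;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.ws.ProtocolException;
        import javax.xml.ws.handler.MessageContext;
        import javax.xml.ws.handler.soap.SOAPHandler;
        import javax.xml.ws.handler.soap.SOAPMessageContext;

        public class ValidatingHandler implements SOAPHandler<SOAPMessageContext> {

            private static final Schema SCHEMA;
            static {
                try {
                    SCHEMA = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                          .newSchema(new File("/opt/schemas/service.xsd"));
                } catch (Exception e) {
                    throw new ExceptionInInitializerError(e);
                }
            }

            public boolean handleMessage(SOAPMessageContext ctx) {
                boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
                if (!outbound) {
                    try {
                        // Validate the body payload; assumes the payload element is
                        // the body's first child. Failure becomes a SOAP fault
                        // instead of reaching the service implementation.
                        SCHEMA.newValidator().validate(new DOMSource(
                            ctx.getMessage().getSOAPBody().getFirstChild()));
                    } catch (Exception e) {
                        throw new ProtocolException("Schema validation failed", e);
                    }
                }
                return true;
            }

            public boolean handleFault(SOAPMessageContext ctx) { return true; }
            public void close(MessageContext ctx) { }
            public Set<QName> getHeaders() { return null; }
        }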

    Read the article

  • Using jQuery to grab the content from CKEditor's iframe

    - by miCRoSCoPiC_eaRthLinG
    Hey guys, I have this custom-written CMS that uses CKEditor (FCKEditor v3) for editing content. Along with that, I'm using the jQuery Validation plugin to check all fields for errors prior to ajax-based submission. For passing the data on to the PHP backend, I'm using the serialize() function. The problem is, serialize manages to grab all the other fields correctly, except for the actual content typed in CKEditor. Like every other WYSIWYG editor, this one too overlays an iframe on an existing textbox, and serialize ignores the iframe and looks only in the textbox for content... which of course it doesn't find, thus returning a blank content body. My approach to this is to hook CKEditor's onchange event and concurrently update the textbox (CKEDITOR.instances.[textboxname].getData() returns the content) or some other hidden field with any changes made in the editor. However, since CKEditor is still in its beta stage and severely lacks documentation, I cannot seem to find a suitable API call that would let me do so. Does anyone have any idea how to go about this? Or maybe suggest an even better method?
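
    A sketch of the usual workaround, assuming the CKEditor 3.x API: every instance exposes updateElement(), which copies the editor contents back into its underlying textarea, so calling it on all instances right before serialize() avoids the onchange hook entirely:

        // Push editor content back into the hidden textareas, then serialize.
        function syncEditors() {
            for (var name in CKEDITOR.instances) {
                CKEDITOR.instances[name].updateElement();
            }
        }

        $('#myForm').submit(function () {
            syncEditors();
            var payload = $(this).serialize(); // now includes the CKEditor content
            // hand payload to the existing ajax submission code
        });

    The same syncEditors() call also belongs before the jQuery Validation run, for the same reason.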

    Read the article

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to an SO question, and that led me to a piece of Sun documentation which explains:

        The parallel collector will throw an OutOfMemoryError if too much time is
        being spent in garbage collection: if more than 98% of the total time is
        spent in garbage collection and less than 2% of the heap is recovered, an
        OutOfMemoryError will be thrown. This feature is designed to prevent
        applications from running for an extended period of time while making
        little or no progress because the heap is too small. If necessary, this
        feature can be disabled by adding the option -XX:-UseGCOverheadLimit to
        the command line.

    Which tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond? I'm trying to determine the best approach to actually solving this issue, rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.
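
    To answer the "98% of what" question empirically: the limit is evaluated over recent collections (a sliding window), not the JVM's whole uptime, and standard HotSpot GC logging makes the thrashing window visible. These are all stock flags:

        # Log every collection with timestamps so you can see the windows where
        # back-to-back full GCs reclaim almost nothing (the overhead-limit trigger).
        java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
             -Xloggc:/var/log/myapp-gc.log com.example.MyApp

    Adding -XX:+HeapDumpOnOutOfMemoryError then captures a heap dump at crash time, which usually points at whatever is keeping the heap nearly full; that is the actual problem the overhead limit is reporting.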

    Read the article

  • Group testNG tests without annotations

    - by diy
    Hi gents and ladies, I'm responsible for enabling unit tests for one of our ETL components. I want to accomplish this using TestNG, with a generic Java test class and a number of test definitions in testng.xml passing various parameters to the class. The Oracle and ETL guys should be able to add new tests without changing the Java code, so we need to use the XML suite file instead of annotations. Question: is there a way to group tests in testng.xml (similarly to how it is done with annotations)? I mean something like:

        <group name="first_group">
            <test>
                <class ...>
                <parameter ...>
            </test>
        </group>
        <group name="second_group">
            <test>
                <class ...>
                <parameter ...>
            </test>
        </group>

    I've checked the testng.dtd and figured out that similar syntax is not allowed. But is there a workaround to allow grouping? Thanks in advance
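
    A sketch of the closest supported shape, assuming stock testng.xml semantics: the <test> element is itself the grouping unit inside a <suite>, so each "group" becomes a named test that reuses the generic class with its own parameters (names below are placeholders):

        <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
        <suite name="etl-checks">
            <test name="first_group">
                <parameter name="sourceTable" value="STAGE_A"/>
                <classes>
                    <class name="com.example.etl.GenericEtlTest"/>
                </classes>
            </test>
            <test name="second_group">
                <parameter name="sourceTable" value="STAGE_B"/>
                <classes>
                    <class name="com.example.etl.GenericEtlTest"/>
                </classes>
            </test>
        </suite>

    Since <parameter> is legal at both suite and test level, the ETL folks can add a new "group" by copying a <test> block and editing its parameters, with no Java changes.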

    Read the article

  • Problems with Merb on Snow Leopard

    - by hamhoagie
    I've recently started looking at Merb, for use with some small projects around the office. I'm trying to set up my first project following the docs, and am encountering an exception such as:

        foo:beta user$ merb
        Merb root at: /Users/user/code/merb/beta
        Loading init file from ./config/init.rb
        Loading ./config/environments/development.rb
        ~ Connecting to database...
        ~ Loaded slice 'MerbAuthSlicePassword' ...
        ~ Parent pid: 39794
        ~ Compiling routes...
        ~ Activating slice 'MerbAuthSlicePassword' ...
        ~
        ~ FATAL: Mongrel is not installed, but you are trying to use it. You need
        to either install mongrel or a different Ruby web server, like thin.

    I have installed Mongrel from gem as well as from MacPorts, and am confused by this exception. Significant stats: ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10]. From my installed gems:

        merb (1.1.0)
        merb-action-args (1.1.0)
        merb-assets (1.1.0)
        merb-auth (1.1.0)
        merb-auth-core (1.1.0)
        merb-auth-more (1.1.0)
        merb-auth-slice-password (1.1.0)
        merb-cache (1.1.0)
        merb-core (1.1.0)
        merb-exceptions (1.1.0)
        merb-gen (1.1.0)
        merb-haml (1.1.0)
        merb-helpers (1.1.0)
        merb-mailer (1.1.0)
        merb-param-protection (1.1.0)
        merb-slices (1.1.0)
        merb_datamapper (1.1.0)
        mongrel (1.1.5)

    Merb documentation is non-existent, so I find myself stuck. Thanks in advance.
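
    Two guesses worth ruling out: this error generally means the Ruby that launches merb can't require the mongrel gem (a MacPorts Ruby vs. system Ruby mismatch is a classic cause on Snow Leopard), and merb can be pointed at a different adapter to sidestep mongrel entirely:

        # Confirm mongrel loads in the same Ruby environment that runs merb
        ruby -e "require 'rubygems'; require 'mongrel'; puts 'mongrel loads fine'"

        # Or bypass mongrel and start merb on thin
        gem install thin
        merb -a thin

    If the require fails, comparing `which ruby` and `which merb` usually reveals that the two commands live under different installations.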

    Read the article

  • Coherent access to mainframe files from Win32 application and IBM RDZ/Eclipse?

    - by Ira Baxter
    I have a suite of tools for processing IBM COBOL source code; these tools are built as Win32 applications and talk to Windows (including network) files using traditional Windows file system calls (open, close, read, write), and work just fine, thank you. I'd like to integrate these with Eclipse; we understand how to get Eclipse to do UI for us, we think. The problem is that Eclipse/RDZ users access mainframe files through some IBM magic. In "How does RDZ access mainframe files" I tried to understand how Eclipse accesses files on a mainframe. Apparently Eclipse/RDZ has a secret file-system access backdoor not available to normal mortals. At issue is how our tools, reading some Windows-accessible file (local disk file, NFS to mainframe, ...), can associate such files with the files that Eclipse can access or is using. Ideally, we'd like UI-integrated versions of our tools to take an Eclipse file-name string for a mainframe file, pass it to our Windows application to process, have the Windows application open/read/process the file, and return results associated with that file to the Eclipse UI. Is there a canonical file-name path, usable with mainframe NFS, that would be equivalent to the name or access object Eclipse RDZ uses to access the same file? Are all operations doable internally by Eclipse also doable by the mainframe NFS (for instance, can NFS read/update an element in a partitioned data set? Can Eclipse RDZ? Does it matter?)? Is the mainframe file access available to custom Java code running under Eclipse RDZ (e.g., equivalents of open/close/read/write based on filename/path/something)? If so, can somebody steer me towards documentation describing the access methods? Has anybody else already solved this problem, or does anyone have a good suggestion?

    Read the article

  • Accessing Item Fields via Sitecore Web Service

    - by Catch
    I am creating items on the fly via the Sitecore web service. So far I can create the items using the AddFromTemplate function, and I also tried this link: http://blog.hansmelis.be/2012/05/29/sitecore-web-service-pitfalls/. But I am finding it hard to access the fields. So far here is my code:

        public void CreateItemInSitecore(string getDayGuid, Oracle.DataAccess.Client.OracleDataReader reader)
        {
            if (getDayGuid != null)
            {
                var sitecoreService = new EverBankCMS.VisualSitecoreService();
                var addItem = sitecoreService.AddFromTemplate(getDayGuid, templateIdRTT, "Testing", database, myCred);
                var getChildren = sitecoreService.GetChildren(getDayGuid, database, myCred);

                for (int i = 0; i < getChildren.ChildNodes.Count; i++)
                {
                    if (getChildren.ChildNodes[i].InnerText.ToString() == "Testing")
                    {
                        var getItem = sitecoreService.GetItemFields(getChildren.ChildNodes[i].Attributes[0].Value,
                                                                    "en", "1", true, database, myCred);
                        string p = getChildren.ChildNodes[i].Attributes[0].Value;
                    }
                }
            }
        }

    So as you can see, I am creating an item, and I want to access the fields for that item. I thought that GetItemFields would give me some value, but I'm finding it hard to get it. Any clue?
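
    A rough sketch of one way to read the result, under the assumption (worth verifying against your service's actual response, e.g. by dumping getItem.OuterXml) that GetItemFields returns an XmlNode tree whose field entries can be walked with standard System.Xml calls. The element and attribute names here are guesses, not the documented contract:

        using System.Xml;

        // getItem is the node returned by GetItemFields above.
        foreach (XmlNode field in getItem.SelectNodes(".//field"))
        {
            XmlAttribute nameAttr = field.Attributes["name"];
            string fieldName = nameAttr != null ? nameAttr.Value : "(unnamed)";
            XmlNode valueNode = field.SelectSingleNode("value");
            string fieldValue = valueNode != null ? valueNode.InnerText : field.InnerText;
            System.Diagnostics.Debug.WriteLine(fieldName + " = " + fieldValue);
        }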

    Read the article

  • DB2Command ExecuteNonQuery Insert multiple rows problem

    - by DB2 Nubie
    I'm attempting to insert multiple rows into a DB2 database using C# code like this:

        string query = "INSERT INTO TESTDB2.RG_Table (V,E,L,N,Q,B,S,P) values" +
            "('lkjlkj', 'iouoiu', '2009-03-27 12:01:19', 'nnne', 'sdfdf', NULL, NULL, NULL)," +
            "('lkjlk2', 'iuoiu2', '2009-03-27 12:01:19', 'nnne2', 'sddf2', NULL, NULL, NULL)";

        DB2Command cmd = new DB2Command(query, this.transactionConnection, this.transaction);
        cmd.ExecuteNonQuery();

    If I stop building the query string after the first set of values is included, it executes without an error. Attempting to load multiple values using this method results in the following error:

        Upload error : ERROR [42601] [IBM][DB2] SQL0104N An unexpected token ","
        was found following "". Expected tokens may include: "". SQLSTATE=42601

    The SQL syntax matches that which I have read elsewhere, such as http://stackoverflow.com/questions/452859/inserting-multiple-rows-in-a-single-sql-query, and IBM's documentation gives this example:

        cmd = conn.CreateCommand();
        cmd.Transaction = trans;
        cmd.CommandText = "INSERT INTO company_a VALUES(5275, 'Sanders', 20, 'Mgr', 15, 18357.50), " +
                          "(5265, 'Pernal', 20, 'Sales', NULL, 18171.25), " +
                          "(5791, 'O''Brien', 38, 'Sales', 9, 18006.00)";
        cmd.ExecuteNonQuery();

    Can anyone explain what could account for this?
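
    A possible explanation, hedged: multi-row VALUES lists are not accepted by every DB2 server (support differs between DB2 for LUW, z/OS, and i, and between versions), so the IBM sample may simply target a newer or different server than yours. A portable fallback is one parameterized INSERT executed per row inside the existing transaction, sketched here with the same IBM.Data.DB2 provider (RowData and rows are illustrative stand-ins for your data source):

        string sql = "INSERT INTO TESTDB2.RG_Table (V,E,L,N,Q,B,S,P) " +
                     "VALUES (@v, @e, @l, @n, @q, NULL, NULL, NULL)";

        using (DB2Command cmd = new DB2Command(sql, this.transactionConnection, this.transaction))
        {
            cmd.Parameters.Add("@v", DB2Type.VarChar);
            cmd.Parameters.Add("@e", DB2Type.VarChar);
            cmd.Parameters.Add("@l", DB2Type.Timestamp);
            cmd.Parameters.Add("@n", DB2Type.VarChar);
            cmd.Parameters.Add("@q", DB2Type.VarChar);

            foreach (RowData row in rows)
            {
                cmd.Parameters["@v"].Value = row.V;
                cmd.Parameters["@e"].Value = row.E;
                cmd.Parameters["@l"].Value = row.L;
                cmd.Parameters["@n"].Value = row.N;
                cmd.Parameters["@q"].Value = row.Q;
                cmd.ExecuteNonQuery();   // one round trip per row, all in one transaction
            }
        }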

    Read the article

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be
        accompanied by cookies, as there is no user interaction with these
        resources. You can decrease request latency by serving static resources
        from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and set it to point to your current one:

        To reserve a cookieless domain for serving static content, register a new
        domain name and configure your DNS database with a CNAME record that
        points the new domain to your existing domain A record. Configure your
        web server to serve static resources from the new domain, and do not
        allow any cookies to be set anywhere on this domain. In your web pages,
        reference the domain name in the URLs for the static resources.

    This is pretty straightforward stuff, except for the bit that says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to do. Apologies if this is a real newb question, I'm new at this! Thanks.
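
    A sketch of the usual answer, assuming the static domain is its own IIS site or application: the .ASPXANONYMOUS cookie comes from ASP.NET's anonymous identification feature, so disabling that (along with session state and any authentication) in that site's web.config stops ASP.NET issuing cookies there at all:

        <!-- web.config for the static-content site (static.example.org is illustrative) -->
        <configuration>
          <system.web>
            <anonymousIdentification enabled="false" />
            <sessionState mode="Off" />
            <authentication mode="None" />
            <roleManager enabled="false" />
          </system.web>
        </configuration>

    If the images are served straight from disk with no managed handler involved, IIS won't add cookies on its own; it is only the ASP.NET modules that do.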

    Read the article

  • Picasa access in android: PicasaUploadActivity

    - by Glyptodon
    I am new to Android, and I'm struggling to figure out exactly what tools are available to me. I am developing for Android 2.0.1 for now, just because that is what my device runs. Specifically, I am writing an app that I would like to upload images to a Picasa album. I am almost sure this is supported; for example, the built-in (Google?) photo viewer has a 'share' button with a Picasa option, and I even found a small bit of sample code (borrowed code! apologies if this is against the rules..):

        temp.setComponent(new ComponentName(
            "com.google.android.apps.uploader",
            "com.google.android.apps.uploader.picasa.PicasaUploadActivity"));
        startActivityForResult(temp, PICASA_INTENT);

    which looks like exactly what I want. But I can find no documentation anywhere, and I am in fact quite unclear how to use this type of resource. From within Eclipse, do I need to include another project, com.google.android.apps.uploader? If so, how do I get it? How do I include it? Is there any working sample code provided for me to peer at?
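
    For what it's worth, a sketch of the surrounding code that snippet implies, using only stock Android APIs: com.google.android.apps.uploader is a separate installed app, not a library to import into Eclipse, and it is reached through a plain ACTION_SEND intent (the content URI below is illustrative; PICASA_INTENT is the request code from the snippet above):

        import android.content.ComponentName;
        import android.content.Intent;
        import android.net.Uri;

        Intent temp = new Intent(Intent.ACTION_SEND);
        temp.setType("image/jpeg");
        temp.putExtra(Intent.EXTRA_STREAM,
                Uri.parse("content://media/external/images/media/42"));
        temp.setComponent(new ComponentName(
                "com.google.android.apps.uploader",
                "com.google.android.apps.uploader.picasa.PicasaUploadActivity"));
        startActivityForResult(temp, PICASA_INTENT);

    Dropping the setComponent() line and wrapping the intent in Intent.createChooser() is the more robust variant: the user then picks Picasa from the share list, and the code keeps working if the uploader app is missing or its internal class names change.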

    Read the article

  • Objective-C Protocols within Protocols

    - by LucasTizma
    I recently began trying my hand at using protocols in my Objective-C development as an (obvious) means of delegating tasks more appropriately among my classes. I completely understand the basic notion of protocols and how they work. However, I came across a roadblock when trying to create a custom protocol that in turn conforms to another protocol. I have since discovered a workaround, but I am curious why the following DOES NOT work:

        @protocol STPickerViewDelegate < UIPickerViewDelegate >
        - ( void )customCallback;
        @end

        @interface STPickerView : UIPickerView
        {
            id< STPickerViewDelegate > delegate;
        }
        @property ( nonatomic, assign ) id< STPickerViewDelegate > delegate;
        @end

    Then in a view controller, which conforms to STPickerViewDelegate:

        STPickerView * pickerView = [ [ STPickerView alloc ] init ];
        pickerView.delegate = self;

        - ( void )customCallback { ... }

        - ( NSString * )pickerView:( UIPickerView * )pickerView titleForRow:( NSInteger )row forComponent:( NSInteger )component { ... }

    The problem was that pickerView:titleForRow:forComponent: was never being called. On the other hand, customCallback was being called just fine, which isn't too surprising. I don't understand why STPickerViewDelegate, which itself conforms to UIPickerViewDelegate, does not notify my view controller when events from UIPickerViewDelegate are supposed to occur. Per my understanding of Apple's documentation, if a protocol (A) itself conforms to another protocol (B), then a class (C) that conforms to the first protocol (A) must also conform to the second protocol (B), which is exactly the behavior I want and expected. What I ended up doing was removing the id< STPickerViewDelegate > delegate property from STPickerView and instead doing something like the following in my STPickerView implementation where I want to invoke customCallback:

        if ( [ self.delegate respondsToSelector:@selector( customCallback ) ] )
        {
            [ self.delegate performSelector:@selector( customCallback ) ];
        }

    This works just fine, but I really am puzzled as to why my original approach did not work.
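
    A likely explanation, offered as a diagnosis rather than doctrine: the protocol inheritance is fine; the trouble is the redeclared delegate ivar/property. Declaring your own `delegate` in STPickerView shadows UIPickerView's delegate storage, so `pickerView.delegate = self` fills the subclass's ivar while the underlying UIPickerView never receives a delegate at all. That matches the symptom exactly: customCallback (dispatched through your ivar) fires, while pickerView:titleForRow:forComponent: (dispatched by UIPickerView's internals) never does. A sketch of a setter that keeps the two in sync:

        // In STPickerView.m: forward assignments to UIPickerView's own delegate
        // slot as well as the subclass ivar.
        - ( void )setDelegate:( id< STPickerViewDelegate > )aDelegate
        {
            delegate = aDelegate;
            [ super setDelegate:aDelegate ];   // lets UIPickerView dispatch its callbacks
        }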

    Read the article

  • Nhibernate: distinct results in second level Collection

    - by Miguel Marques
    I have an object model like this:

        class EntityA
        {
            ...
            IList<EntityB> BList;
            ...
        }

        class EntityB
        {
            ...
            IList<EntityC> CList;
        }

    I have to fetch all the collections (BList in EntityA and CList in EntityB), because they will all be needed for some operations; if I don't eager load them, I will have the select n+1 problem. So the query was this:

        select a from EntityA a
        left join fetch a.BList b
        left join fetch b.CList c

    The first problem I faced with this query was the return of duplicates from the DB: I had EntityA duplicates because of the left join fetch with BList. A quick read through the Hibernate documentation offered some solutions. First I tried the distinct keyword, which supposedly wouldn't replicate the SQL distinct keyword except in some cases; maybe this was one of those cases, because I got a SQL error saying that I cannot select distinct text columns (column [Observations] in the EntityA table). So I used one of the other solutions:

        query.SetResultTransformer(new DistinctRootEntityResultTransformer());

    This worked fine. But the result of the operations was still not passing the tests. I checked further and found that there were now duplicates of EntityB, because of the left join fetch with CList. The question is: how can I use the distinct in a second-level collection? I searched and I only find solutions for the root entity's direct child collection, never for the second-level child collections... Thank you for your time
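
    For what it's worth: DistinctRootEntityResultTransformer only de-duplicates the root, by design; the repeated EntityB children are the cartesian product of fetching two collection levels in one joined query. Two common ways out, sketched on the assumption that the collections are semantically sets: map the collections as <set> so NHibernate collapses repeats, and/or fetch the second level with a subselect instead of a join so the product never arises (the mapping below is an illustrative hbm fragment for EntityB):

        <!-- CList as a set, loaded by one extra query instead of joined in -->
        <set name="CList" fetch="subselect">
            <key column="EntityB_Id" />
            <one-to-many class="EntityC" />
        </set>

    With fetch="subselect", the HQL keeps only `left join fetch a.BList b`, and NHibernate loads all the CList collections for the returned Bs in a single second query: still two queries total, with no n+1 and no duplicates.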

    Read the article

  • Eclipse / Aptana File Sync Solutions

    - by Brad
    Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them map their Eclipse projects directly to the web server. I'd rather they create a local project and use that to sync to the web server project directory they are working on. The issue is that there aren't any good solutions for this, which is just appalling given the popularity of the two tools. The FileSync plugin for Eclipse is only one-way: if another developer changes a file on the server, the first dev isn't even notified and could overwrite the change. The File Transfer option in Aptana 2.0 doesn't support any sort of sync, just manual uploading/downloading of files. The Sync option in Aptana 1.5.1 doesn't let you merge files when they differ; you can only update one or the other. It does let you view a diff (but only if you right click and select), and in that diff you can't make any changes. I did find a way to allow files to be uploaded to their sync repositories in Aptana using Eclipse Monkey; however, it doesn't work if a user saves multiple files at once ('Save All' fails as well). Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any sort of listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between. My only solution at this point is to let them continue mapping directly to the server, or to ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?). Anyone have any ideas?

    Read the article

  • Zendx JQuery Autocomplete

    - by emeraldjava
    I've been trying to get the ZendX jQuery autocomplete function working when I noticed this section in the Zend documentation:

        The following UI widgets are available as form view helpers. Make sure
        you use the correct version of the jQuery UI library to be able to use
        them. The Google CDN only offers jQuery UI up to version 1.5.2. Some
        other components are only available from jQuery UI SVN, since they have
        been removed from the announced 1.6 release.

        autoComplete($id, $value, $params, $attribs): The AutoComplete view
        helper will be included in a future jQuery UI version (currently only
        via jQuery SVN); it creates a text field and registers it to have
        autocomplete functionality. The completion data source has to be given
        via the jQuery-related parameters 'url' or 'data', as described in the
        jQuery UI manual.

    Does anybody know which SVN URL, tag, or branch I need to download to get a JavaScript file with the autocomplete functions available in it? At the moment, my Bootstrap.php has:

        $view->addHelperPath('ZendX/JQuery/View/Helper/', 'ZendX_JQuery_View_Helper');
        $view->jQuery()->enable();
        $view->jQuery()->uiEnable();

        Zend_Controller_Action_HelperBroker::addHelper(
            new ZendX_JQuery_Controller_Action_Helper_AutoComplete()
        );

        // Add it to the ViewRenderer
        $viewRenderer = new Zend_Controller_Action_Helper_ViewRenderer();
        $viewRenderer->setView($view);
        Zend_Controller_Action_HelperBroker::addHelper($viewRenderer);

    In my layout, I define the jQuery UI version I want:

        <?php echo $this->jQuery()->setUiVersion('1.7.2'); ?>

    Finally, my index.phtml has the autocomplete widget:

        <p><?php $data = array('New York', 'Tokyo', 'Berlin', 'London', 'Sydney',
                               'Bern', 'Boston', 'Baltimore'); ?>
        <?php echo $this->autocomplete("ac1", "", array('data' => $data)); ?></p>

    I'm using Zend 1.8.3 at the moment.
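
    One way to sidestep the SVN hunt, offered as a workaround: autocomplete shipped as an official jQuery UI widget in 1.8, so loading 1.8+ and wiring the field directly avoids depending on the removed SVN-era plugin at all (the helper's 'url'/'data' parameters map onto the 1.8 widget's single `source` option):

        // Requires jQuery UI >= 1.8, where ui.autocomplete is part of the release
        var cities = ["New York", "Tokyo", "Berlin", "London",
                      "Sydney", "Bern", "Boston", "Baltimore"];

        $(function () {
            $("#ac1").autocomplete({ source: cities });   // or source: "/search/url"
        });

    The text input itself can still be rendered with the plain formText view helper, so only the JavaScript side changes.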

    Read the article

  • How to draw line inside a scatter plot

    - by ruffy
    I can't believe that this is so complicated, but I've tried and googled for a while now. I just want to analyse my scatter plot with a few graphical features. For starters, I want to add a simple line. So, I have a few (4) points, and like in this plot [1] I want to add a line to it.

        [1] http://en.wikipedia.org/wiki/File:ROC_space-2.png

    Now, this won't work. And frankly, the documentation-examples-gallery combo and content of matplotlib is a bad source for information. My code is based upon a simple scatter plot from the gallery:

        # definitions for the axes
        left, width = 0.1, 0.85    # 0.65
        bottom, height = 0.1, 0.85 # 0.65
        bottom_h = left_h = left + width + 0.02

        rect_scatter = [left, bottom, width, height]

        # start with a rectangular Figure
        fig = plt.figure(1, figsize=(8, 8))
        axScatter = plt.axes(rect_scatter)

        # the scatter plot:
        p1 = axScatter.scatter(x[0], y[0], c='blue', s=70)
        p2 = axScatter.scatter(x[1], y[1], c='green', s=70)
        p3 = axScatter.scatter(x[2], y[2], c='red', s=70)
        p4 = axScatter.scatter(x[3], y[3], c='yellow', s=70)
        p5 = axScatter.plot([1, 2, 3], "r--")

        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"], loc=2)

        # now determine nice limits by hand:
        binwidth = 0.25
        xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
        lim = (int(xymax / binwidth) + 1) * binwidth

        axScatter.set_xlim((-lim, lim))
        axScatter.set_ylim((-lim, lim))

        xText = axScatter.set_xlabel('FPR / Specificity')
        yText = axScatter.set_ylabel('TPR / Sensitivity')

        bins = np.arange(-lim, lim + binwidth, binwidth)

        plt.show()

    Everything works except p5, which is a line. Now how is this supposed to work? What's good practice here?
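
    The probable culprit, as a guess from the code alone: Axes.plot() returns a list of Line2D objects rather than a single artist, so p5 is a one-element list, which the legend call can't treat like the scatter handles; also, plot([1, 2, 3]) draws against x positions 0, 1, 2 instead of the diagonal. Both fixes in two lines:

        # Unpack the single Line2D from the list plot() returns, and pass explicit
        # x and y data so the dashed line spans the random-guess diagonal.
        p5, = axScatter.plot([0, 1], [0, 1], "r--")

        plt.legend([p1, p2, p3, p4, p5],
                   [names[0], names[1], names[2], names[3], "Random guess"], loc=2)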

    Read the article

  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent NHibernate to generate mappings so I can take a look at the files and the SQL. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies. I am using the latest code from git. Here's my config code:

        Configuration cfg = new Configuration();
        var ft = Fluently.Configure(cfg);

        // DbConnection by fluent
        ft.Database(
            MsSqlConfiguration
                .MsSql2008
                .ConnectionString("……")
                .ShowSql()
                .UseReflectionOptimizer()
        );

        // get mapping files.
        ft.Mappings(m =>
        {
            // set up the mapping locations
            m.FluentMappings.AddFromAssemblyOf<Entity>()
                .ExportTo(@"C:\temp");
            m.Apply(cfg);
        });

    I also tried:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration
                .MsSql2008
                .ShowSql()
                .ConnectionString("……"))
            .Mappings(p => p.FluentMappings
                .AddFromAssemblyOf<Entity>()
                .ExportTo(@"c:\temp\"))
            .BuildSessionFactory();

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder, and no SQL code shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick
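
    A guess worth ruling out: FluentMappings.AddFromAssemblyOf<T> scans the assembly containing T for ClassMap subclasses, so if Entity lives in a different assembly from its EntityMap (exactly the split described in the linked question), the scan finds nothing and ExportTo silently writes nothing. A sketch pointing the scan at the mapping type instead (EntityMap stands in for one of your actual ClassMaps, and the connection string is illustrative):

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("Server=localhost;Database=AppDb;Trusted_Connection=True;")
                .ShowSql())
            .Mappings(m => m.FluentMappings
                .AddFromAssemblyOf<EntityMap>()        // the assembly that holds the ClassMaps
                .ExportTo(@"C:\temp"))
            .BuildSessionFactory();                    // mappings compile (and export) here

    Also note that ShowSql only prints when queries actually run through a built session factory, so the first snippet, which never builds one or opens a session, won't log SQL regardless.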

    Read the article

  • Should all public methods of an API be documented?

    - by cynicalman
    When writing "library" type classes, is it better practice to always write markup documentation (i.e. javadoc) in java or assume that the code can be "self-documenting"? For example, given the following method stub: /** * Copies all readable bytes from the provided input stream to the provided output * stream. The output stream will be flushed, but neither stream will be closed. * * @param inStream an InputStream from which to read bytes. * @param outStream an OutputStream to which to copy the read bytes. * @throws IOException if there are any errors reading or writing. */ public void copyStream(InputStream inStream, OutputStream outStream) throws IOException { // copy the stream } The javadoc seems to be self-evident, and noise that just needs to be updated if the funcion is changed at all. But the sentence about flushing and not closing the stream could be valuable. So, when writing a library, is it best to: a) always document b) document anything that isn't obvious c) never document (code should speak for itself!) I usually use b), myself (since the code can be self-documenting otherwise)...

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you prefer it was written in the current version of Django/Pylons at the time, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies?

    Heavy framework:

        - Advantages: standard approach to application design and structure, as encouraged by the framework; less application code to worry about.
        - Disadvantages: requires learning the framework to understand how things work; broken things in an old version of the framework are difficult to fix; upgrading to a new version is potentially difficult due to changing APIs; finding relevant documentation/help is potentially difficult due to changing APIs.

    Light framework:

        - Advantages: most application code is directly "visible"; only needed features are implemented; the architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies.
        - Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs).

    I will update the list with any helpful answers.

    Read the article

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the major part of the version identifier. This plug-in serves as a dependency of a given feature, and the feature is in turn included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how would this major version change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)? For example, assume we have the typical "Hello World" setup as follows:

        Plug-in: com.example.helloworld, version 1.0.0
        Feature: com.example.helloworld.feature, version 1.0.0
        Product: com.example.helloworld.product, version 1.0.0

    If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature, 1.1.0? The same question applies at the product level as well (e.g., if the feature is 1.1.0 OR 2.0.0, what is the product version number?). I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches, and it has not touched on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and/or best practices for making such versioning updates, and how to best reflect them throughout other dependent projects in the workspace.
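
    A sketch of how the ripple usually plays out, as an interpretation of the wiki guidelines rather than official doctrine: the plug-in's API break forces its major bump, consumers then raise their version ranges past the break, and the feature and product bump according to how their own content changed. Using the Hello World numbers:

        com.example.helloworld/META-INF/MANIFEST.MF (API break, so a major bump):

            Bundle-Version: 2.0.0

        Any bundle that depends on it raises its range accordingly:

            Require-Bundle: com.example.helloworld;bundle-version="[2.0.0,3.0.0)"

        feature.xml for the feature (its content changed, so at least a minor bump
        to 1.1.0; many teams simply mirror the major bump to 2.0.0):

            <feature id="com.example.helloworld.feature" version="1.1.0">
                <plugin id="com.example.helloworld" version="2.0.0"/>
            </feature>

    The product then follows the same reasoning against the feature: its number reflects what changed from the product consumer's point of view, not an automatic copy of the feature's version.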

    Read the article
