Search Results

Search found 23207 results on 929 pages for 'node form'.


  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally proprietary Google technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() procedure and a Reduce() procedure. The Map() procedure performs filtering and sorting operations on the data, whereas the Reduce() procedure performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow.
    Advantages of MapReduce Procedures: The MapReduce framework usually consists of distributed servers and runs its various tasks in parallel. Various components manage the communication between the data nodes and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on the distributed servers at run time. If a node fails during this process, the framework provides high availability: other available nodes take over the responsibility of the failed node. As you can see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data on thousands of processing machines.
    How Does the MapReduce Framework Work? A typical MapReduce deployment holds petabytes of data on thousands of nodes. Here is a basic explanation of the MapReduce procedures, which use this massive pool of commodity servers. Map() Procedure: There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems. These sub-problems are distributed to worker nodes. A worker node then processes them and does the necessary analysis. Once a worker node has finished with its sub-problem, it returns the result to the master node. Reduce() Procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects these answers and aggregates them into the answer to the original big problem that it was given. The MapReduce framework runs the Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it can send its result back to the master node to be combined into a single answer. This procedure can be very effective when it is applied to a very large amount of data (Big Data).
    The MapReduce framework has five different steps: preparing the Map() input, executing the user-provided Map() code, shuffling the Map output to the Reduce processors, executing the user-provided Reduce() code, and producing the final output. Here is the dataflow of the MapReduce framework: Input Reader, Map Function, Partition Function, Compare Function, Reduce Function, Output Writer. In a future blog post of this series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement: MapReduce is the equivalent of SELECT and GROUP BY in a relational database, applied to a very large database. Tomorrow: In tomorrow's blog post we will discuss the buzz word HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
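    To make the Map()/Reduce() split concrete, here is a minimal, hypothetical sketch in plain Python (illustrative only, not Hadoop code): the map step emits key/value pairs, a shuffle step groups them by key, and the reduce step summarizes each group. This is also why MapReduce is often compared to SELECT with GROUP BY.

    ```python
    from collections import defaultdict

    def map_step(record):
        # Emit (key, value) pairs; here, one pair per word in a line of text.
        for word in record.split():
            yield word, 1

    def shuffle(pairs):
        # Group all values by key (what the framework's partition/sort phase does).
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_step(key, values):
        # Summarize one group; here a simple count, like COUNT(*) ... GROUP BY word.
        return key, sum(values)

    if __name__ == "__main__":
        lines = ["big data is big", "map reduce is simple"]
        pairs = (pair for line in lines for pair in map_step(line))
        result = dict(reduce_step(k, v) for k, v in shuffle(pairs).items())
        print(result)  # {'big': 2, 'data': 1, 'is': 2, 'map': 1, 'reduce': 1, 'simple': 1}
    ```

    In a real cluster the Map() and Reduce() calls run on many worker nodes and the shuffle moves data over the network; the single-process version above only illustrates the data flow.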

    Read the article

  • Anyone have experience with Silicon Mechanics 4-Node Machines?

    - by Matt Simmons
    I'm taking a look at buying some new servers (small infrastructure, 2 racks, etc), and although I like a lot of the features in blades, I'm looking at the price point for Silicon Mechanics' 4-node machines. http://www.siliconmechanics.com/i27091/xeon-2U-4-Node.php It's a bit like a mini-blade enclosure, but has no shared resources, except for the redundant power supplies. A single point of management would be great, but for the low price point here, I'm possibly willing to give that up, if the server quality is adequate. Basically, have you used these machines? Any problems? Anything you like?

    Read the article

  • How to access a node of a LAN via the WAN?

    - by gilzero
    Let's say I have a router that is connected to the Internet. A WAN IP address is assigned by the ISP. It uses PPPoE ADSL, so the IP address is not static; every time it connects, a different IP address is assigned. There is a web server at 192.168.0.100 running in the LAN. I heard something like DMZ + DynDNS can do the job, but I'm not sure what these are or how they work. Is there any way to access the local node 192.168.0.100 via the WAN, so that I can reach that node even when I am not home? Thanks for any advice.
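    For background, this is roughly what the DynDNS part does: a small client periodically looks up the router's current public (WAN) IP and, whenever it changes, pushes the new address to the DNS provider so that a fixed hostname keeps pointing at the home connection; the DMZ or port-forwarding rule on the router then sends incoming requests on to 192.168.0.100. A minimal, hypothetical Python sketch of the client side (the IP-checker URL and the update call are placeholders, not any specific provider's API):

    ```python
    import time
    import urllib.request

    CHECK_IP_URL = "https://api.ipify.org"  # assumption: any service returning your public IP as plain text

    def current_wan_ip():
        with urllib.request.urlopen(CHECK_IP_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    def update_dns(hostname, ip):
        # Placeholder: a real client would call the DynDNS provider's update URL here.
        print(f"would update {hostname} -> {ip}")

    def run(hostname, interval=300):
        last_ip = None
        while True:
            ip = current_wan_ip()
            if ip != last_ip:          # WAN address changed (e.g. new PPPoE session)
                update_dns(hostname, ip)
                last_ip = ip
            time.sleep(interval)
    ```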

    Read the article

  • How to extend a file definition from an existing module in the node?

    - by c33s
    I use an older version of the example42 mysql module, which defines the mysql.conf file but not its content. My goal is to just include the mysql module and add a content definition in the node. class mysql { ... file { "mysql.conf": path => "${mysql::params::configfile}", mode => "${mysql::params::configfile_mode}", owner => "${mysql::params::configfile_owner}", group => "${mysql::params::configfile_group}", ensure => present, require => Package["mysql"], notify => Service["mysql"], } ... } node xyz { include mysql File["mysql.conf"] { content => template("mymodule/mysql.conf.erb")} } The above code produces an "Only subclasses can override parameters" error. What is the correct way to just add a content definition to an existing file definition?

    Read the article

  • How can I hide the SiteMapPath root node on home page?

    - by Jamie Ide
    How can I hide the root node in a SiteMapPath control when the user is on the root node page? For example, my breadcrumb trail on a child page is: Home Products Hammers Ball Peen which is fine. But when the user is on the Home page, the SiteMapPath control displays Home which is useless clutter. I want to suppress displaying Home (the root node) when the user is on the home page. I have the SiteMapPath control in a master page. Also, I'm handling SiteMapResolve to set the querystrings in the nodes.

    Read the article

  • How to select all parents of a node in a hierarchical mysql table?

    - by Ehsan Khodarahmi
    I have a MySQL table that represents data for a tree GUI component; here's the structure of my table: treeTable ( id INT NOT NULL PRIMARY KEY, parentId INT, name VARCHAR(255) ); parentId is a self-referencing foreign key. Now I want to write a stored procedure which gets a node id and returns a result set that contains that node and all of its parents. For example, suppose that my table has been filled with this data: 1, null, 'root'; 2, 1, 'level_1'; 3, 2, 'level_2'. Now I want to get all parent nodes of node 3 (nodes 1 and 2) and return a result set that contains those tree records. Can anybody help me please?
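    Typical answers use a stored procedure with a WHILE loop over parentId or, in newer MySQL versions, a recursive common table expression. Purely to illustrate the walk-up logic, here is a hypothetical Python sketch that follows the parentId chain after the rows have been fetched:

    ```python
    def ancestors(rows, node_id):
        """rows: list of (id, parentId, name) tuples; returns the node and all of its parents, root last."""
        by_id = {row[0]: row for row in rows}
        chain = []
        current = node_id
        while current is not None and current in by_id:
            row = by_id[current]
            chain.append(row)
            current = row[1]  # follow the self-referencing parentId
        return chain

    rows = [(1, None, "root"), (2, 1, "level_1"), (3, 2, "level_2")]
    print(ancestors(rows, 3))  # [(3, 2, 'level_2'), (2, 1, 'level_1'), (1, None, 'root')]
    ```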

    Read the article

  • How to convert a DOM node list to an array in Javascript?

    - by Guss
    I have a JavaScript function that accepts a list of HTML nodes, but it expects a JavaScript array (it runs some Array methods on it), and I want to feed it the output of Document.getElementsByTagName, which returns a DOM node list. Initially I thought of using something simple like: Array.prototype.slice.call(list,0) And that works fine in all browsers, except of course Internet Explorer, which returns the error "JScript object expected", as apparently the DOM node list returned by the Document.getElement* methods is not enough of a JScript object to be the target of a function call. Caveats: I don't mind writing Internet Explorer specific code, but I'm not allowed to use any JavaScript libraries such as jQuery, because I'm writing a widget to be embedded into a 3rd party web site and I cannot load external libraries that would create conflicts for the clients. My last-ditch effort is to iterate over the DOM node list and create an array myself, but is there a nicer way to do that?

    Read the article

  • How to Validate an XML Node against an XSD in C++?

    - by Ashish
    Hi. Please note that I'm asking for validation against a particular node and not the whole file. For example: <somexmldoc> <someNode> <UserDefinedNode> </> <UserDefinedNode> </> </someNode> </somexmldoc> For this XML doc, I have a wholeDoc.XSD which could be used to validate the whole document except "UserDefinedNode" (this node is specified with an "any" tag in the XSD, which allows a user to define anything under that node). Is it possible to have a separate userdefined.XSD file to validate "UserDefinedNode"? Is it possible to use MSXML for C++ (IXMLDomDocument) to validate this? Thanks!

    Read the article

  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there is a series of .tmp files. What are these files, and what feature of Hudson do they support? Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, task scanner, etc., the job appears to hang for a long period of time. While monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that files in this path stopped changing. Some key configuration points of the job: it is tied to a specific slave node; it builds in a 'custom workspace'; it triggers a downstream job that builds in the same custom workspace on the same slave node.

    Read the article

  • How do I execute an action in drupal after each time a node is saved?

    - by ford
    I'm developing an Action in Drupal which is supposed to activate after saving a node, exporting content to XML (which includes data from the node that was just saved), using the "Trigger: After saving an updated post" trigger. Unfortunately, this action actually happens right before the information from the recently saved post is saved to the database, i.e. when looking at the XML later, I find that the most recent change I made was not included. Saving after editing a different node will restore the previously missing data. How can I get my action to fire after the saving process is complete?

    Read the article

  • How to map a set of text as a whole to a node?

    - by JIpeng Tan
    Suppose I have a plain text file with the following data (one item per line): DataSetOne, content, content, content, DataSetTwo, content, content, content, content, ...and so on... What I want to do is count how many content lines are in each data set. For example, the result should be <DataSetOne, 3>, <DataSetTwo, 4>. I am a beginner to Hadoop, and I wonder if there is a way to map a chunk of data as a whole to a node; for example, send all of DataSetOne to node 1 and all of DataSetTwo to node 2. Can anyone give me an idea how to achieve this?
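    One common approach (a sketch, not the only way) is to let the mapper remember the most recent DataSet header it has seen and tag every following content line with it; the framework's shuffle then delivers all lines belonging to one data set to the same reducer, which only has to count them. This assumes a data set is not split across mappers (for example, one file per data set or a non-splittable input format). Illustrated below as hypothetical Hadoop Streaming-style Python scripts that read stdin and write tab-separated key/value pairs:

    ```python
    # mapper.py -- emits "<dataset>\t1" for every content line
    # (assumes a data set is not split across mappers, e.g. one file per data set).
    import sys

    current = None
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        if line.startswith("DataSet"):   # a header line starts a new data set
            current = line
        elif current is not None:
            print(f"{current}\t1")
    ```

    ```python
    # reducer.py -- sums the counts per data set key.
    import sys

    counts = {}
    for line in sys.stdin:
        line = line.rstrip("\n")
        if not line:
            continue
        key, _, value = line.partition("\t")
        counts[key] = counts.get(key, 0) + int(value)

    for key, total in counts.items():
        print(f"{key}\t{total}")
    ```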

    Read the article

  • Jenkins to not allow the same job to run concurrently on the same node?

    - by Marek Gimza
    I have 4 nodes and 2 jobs. Any node can run 2 jobs concurrently and any job can be executed concurrently. I want to be able to restrict running the same job concurrently on the same machine. For example: Jobs: J1 and J2 nodes: N1,N2,N3 and N4 I can run J1 and J2 on the same node at the same time. I can run J1 on N1 and N3 at the same time. BUT I do not want to run J1 and another build of J1 on the same node at the same time. I have tried "Locks and Latches", "Jenkins Exclusive Execution", "Exclusion Plugin" plugins, and these will work well when trying to coordinate different jobs. But my case is trying to manage different build-instances of the same job.

    Read the article

  • Unexpected start of already-primary server processes when heartbeat on secondary is stopped.

    - by vorik
    Hi, I've got an active-passive Heartbeat cluster with Apache, MySQL, ActiveMQ and DRBD. Today, I wanted to perform hardware-maintenance on the secondary node (node04), so I stopped the heartbeat service before shutting it down. Then, the primary node (node03) received a shutdown notice from the secondary node (node04). This logging comes from the primary node: node03 heartbeat[4458]: 2010/03/08_08:52:56 info: Received shutdown notice from 'node04.companydomain.nl'. heartbeat[4458]: 2010/03/08_08:52:56 info: Resources being acquired from node04.companydomain.nl. harc[27522]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/status status heartbeat[27523]: 2010/03/08_08:52:56 info: Local Resource acquisition completed. mach_down[27567]: 2010/03/08_08:52:56 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired mach_down[27567]: 2010/03/08_08:52:56 info: mach_down takeover complete for node node04.companydomain.nl. heartbeat[4458]: 2010/03/08_08:52:56 info: mach_down takeover complete. harc[27620]: 2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp ip-request-resp[27620]: 2010/03/08_08:52:56 received ip-request-resp drbddisk OK yes ResourceManager[27645]: 2010/03/08_08:52:56 info: Acquiring resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212 ResourceManager[27645]: 2010/03/08_08:52:56 info: Running /etc/ha.d/resource.d/drbddisk start Filesystem[27700]: 2010/03/08_08:52:57 INFO: Running OK ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/mysql start mysql[27783]: 2010/03/08_08:52:57 Starting MySQL[ OK ] apache[27853]: 2010/03/08_08:52:57 INFO: Running OK ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/monitor start monitor[28160]: 2010/03/08_08:52:58 ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq start activemq[28210]: 2010/03/08_08:52:58 Starting ActiveMQ Broker... ActiveMQ Broker is already running. 
ResourceManager[27645]: 2010/03/08_08:52:58 ERROR: Return code 1 from /etc/ha.d/resource.d/activemq ResourceManager[27645]: 2010/03/08_08:52:58 CRIT: Giving up resources due to failure of activemq ResourceManager[27645]: 2010/03/08_08:52:58 info: Releasing resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::[email protected]::DRBDFailureDrisAcc MailTo::[email protected]::DRBDFailureDrisAcc 1.2.3.212 ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/IPaddr 1.2.3.212 stop IPaddr[28329]: 2010/03/08_08:52:58 INFO: ifconfig eth0:0 down IPaddr[28312]: 2010/03/08_08:52:58 INFO: Success ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop MailTo[28378]: 2010/03/08_08:52:58 INFO: Success ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo [email protected] DRBDFailureDrisAcc stop MailTo[28433]: 2010/03/08_08:52:58 INFO: Success ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/tivoli-cluster stop ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq stop activemq[28503]: 2010/03/08_08:53:01 Stopping ActiveMQ Broker... Stopped ActiveMQ Broker. ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/monitor stop monitor[28681]: 2010/03/08_08:53:01 ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/LVSSyncDaemonSwap master stop LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster down LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncbackup up LVSSyncDaemonSwap[28714]: 2010/03/08_08:53:02 info: ipvs_syncmaster released ResourceManager[27645]: 2010/03/08_08:53:02 info: Running /etc/ha.d/resource.d/apache /etc/httpd/conf/httpd.conf stop apache[28782]: 2010/03/08_08:53:03 INFO: Killing apache PID 18390 apache[28782]: 2010/03/08_08:53:03 INFO: apache stopped. apache[28771]: 2010/03/08_08:53:03 INFO: Success ResourceManager[27645]: 2010/03/08_08:53:03 info: Running /etc/ha.d/resource.d/mysql stop mysql[28851]: 2010/03/08_08:53:24 Shutting down MySQL.....................[ OK ] ResourceManager[27645]: 2010/03/08_08:53:24 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 stop Filesystem[29010]: 2010/03/08_08:53:25 INFO: Running stop for /dev/drbd0 on /data Filesystem[29010]: 2010/03/08_08:53:25 INFO: Trying to unmount /data Filesystem[29010]: 2010/03/08_08:53:25 ERROR: Couldn't unmount /data; trying cleanup with SIGTERM Filesystem[29010]: 2010/03/08_08:53:25 INFO: Some processes on /data were signalled Filesystem[29010]: 2010/03/08_08:53:27 INFO: unmounted /data successfully Filesystem[28999]: 2010/03/08_08:53:27 INFO: Success ResourceManager[27645]: 2010/03/08_08:53:27 info: Running /etc/ha.d/resource.d/drbddisk stop heartbeat[4458]: 2010/03/08_08:53:29 WARN: node node04.companydomain.nl: is dead heartbeat[4458]: 2010/03/08_08:53:29 info: Dead node node04.companydomain.nl gave up resources. heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth0 dead. heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth1 dead. hb_standby[29193]: 2010/03/08_08:53:57 Going standby [foreign]. heartbeat[4458]: 2010/03/08_08:53:57 info: node03.companydomain.nl wants to go standby [foreign] Soo... What just happened here??? 
Heartbeat on node04 stopped and told node03, which was the active node at the time. Somehow, node03 decided to start the cluster processes that were already running. (For the processes that are not critical, I always return a 0 from the startup script so it does not stop the entire cluster when a non-essential part fails.) When starting ActiveMQ, it returns status 1 because it is already running. This fails the node and shuts everything down. As heartbeat is not running on the secondary node, it cannot fail over to it. When I tried to run ha_takeover to restart the resources, absolutely nothing happened. Only after I restarted heartbeat on the primary node could the resources be started (after a delay of 2 minutes). These are my questions: Why does heartbeat on the primary node try to start the cluster processes again? Why did ha_takeover not work? What can I do to prevent this from happening? Server configuration: DRBD: version: 8.3.7 (api:88/proto:86-91) GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by [email protected], 2010-01-20 09:14:48 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate B r---- ns:0 nr:6459432 dw:6459432 dr:0 al:0 bm:301 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0 uname -a Linux node04 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux haresources node03.companydomain.nl \ drbddisk \ Filesystem::/dev/drbd0::/data::ext3 \ mysql \ apache::/etc/httpd/conf/httpd.conf \ LVSSyncDaemonSwap::master \ monitor \ activemq \ tivoli-cluster \ MailTo::[email protected]::DRBDFailureDrisAcc \ MailTo::[email protected]::DRBDFailureDrisAcc \ 1.2.3.212 ha.cf debugfile /var/log/ha-debug logfile /var/log/ha-log keepalive 500ms deadtime 30 warntime 10 initdead 120 udpport 694 mcast eth0 225.0.0.3 694 1 0 mcast eth1 225.0.0.4 694 1 0 auto_failback off node node03.companydomain.nl node node04.companydomain.nl respawn hacluster /usr/lib64/heartbeat/dopd apiauth dopd gid=haclient uid=hacluster Thank you very much in advance, Ger Apeldoorn

    Read the article

  • During npm install socket.io I get error 127, node-waf command not found. How to solve it?

    - by SandyWeb
    I'm trying to install socket.io on centos 5 with node.js package manager. During installation I got an error: "make: node-waf: Command not found" and "This is most likely a problem with the ws package" # npm install socket.io npm http GET https://registry.npmjs.org/socket.io npm http 304 https://registry.npmjs.org/socket.io npm http GET https://registry.npmjs.org/policyfile/0.0.4 npm http GET https://registry.npmjs.org/redis/0.6.7 npm http GET https://registry.npmjs.org/socket.io-client/0.9.2 npm http 304 https://registry.npmjs.org/policyfile/0.0.4 npm http 304 https://registry.npmjs.org/socket.io-client/0.9.2 npm http 304 https://registry.npmjs.org/redis/0.6.7 npm http GET https://registry.npmjs.org/uglify-js/1.2.5 npm http GET https://registry.npmjs.org/ws npm http GET https://registry.npmjs.org/xmlhttprequest/1.2.2 npm http GET https://registry.npmjs.org/active-x-obfuscator/0.0.1 npm http 304 https://registry.npmjs.org/xmlhttprequest/1.2.2 npm http 304 https://registry.npmjs.org/uglify-js/1.2.5 npm http 304 https://registry.npmjs.org/ws npm http 304 https://registry.npmjs.org/active-x-obfuscator/0.0.1 npm http GET https://registry.npmjs.org/zeparser/0.0.5 > [email protected] preinstall /root/node_modules/socket.io/node_modules/socket.io-client/node_modules/ws > make **node-waf configure build make: node-waf: Command not found make: *** [all] Error 127** npm ERR! [email protected] preinstall: `make` npm ERR! `sh "-c" "make"` failed with 2 npm ERR! npm ERR! Failed at the [email protected] preinstall script. npm ERR! This is most likely a problem with the ws package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! make npm ERR! You can get their info via: npm ERR! npm owner ls ws npm ERR! There is likely additional logging output above. npm ERR! npm ERR! System Linux 2.6.18-194.17.4.el5 npm ERR! command "node" "/usr/bin/npm" "install" "socket.io" npm ERR! cwd /root npm ERR! node -v v0.6.13 npm ERR! npm -v 1.1.10 npm ERR! code ELIFECYCLE npm ERR! message [email protected] preinstall: `make` npm ERR! message `sh "-c" "make"` failed with 2 npm ERR! errno {} npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /root/npm-debug.log npm not ok What is "node-waf" and how can I solve this problem? Thanks!

    Read the article

  • ajaxSubmit options success & error functions aren't fired

    - by Thommy Tomka
    jQuery 1.7.2 jQuery Validate 1.1.0 jQuery Form 3.18 Wordpress 3.4.2 I am trying to code a contact/mail form in the above environment, with the above jQuery libs. Now I am having a problem with the jQuery Form JS: I have taken the original code from the developer's page for ajaxSubmit and only altered the target option to an ID which exists in my HTML source and replaced $ with jQuery in the function showRequest. The problem is that the function named after success: does not fire. I tried the same with error: and again nothing fired. Only complete: did, and the function I placed there alerted the responseText from the receiving script. Does anyone have an idea what's going wrong? Thanks in advance! Thomas jQuery(document).ready(function() { var options = { target: '#mail-status', // target element(s) to be updated with server response beforeSubmit: showRequest, // pre-submit callback success: showResponse, // post-submit callback // other available options: //url: url // override for form's 'action' attribute //type: type // 'get' or 'post', override for form's 'method' attribute //dataType: null // 'xml', 'script', or 'json' (expected server response type) //clearForm: true // clear all form fields after successful submit //resetForm: true // reset the form after successful submit // $.ajax options can be used here too, for example: //timeout: 3000 }; jQuery("#mailform").validate( { submitHandler: function(form) { jQuery(form).ajaxSubmit(options); }, errorPlacement: function(error, element) { }, rules: { author: { minlength: 2, required: true }, email: { required: true, email: true }, comment: { minlength: 2, required: true } }, highlight: function(element) { jQuery(element).addClass("e"); jQuery(element.form).find("label[for=" + element.id + "]").addClass("e"); }, unhighlight: function(element) { jQuery(element).removeClass("e"); jQuery(element.form).find("label[for=" + element.id + "]").removeClass("e"); } }); }); // pre-submit callback function showRequest(formData, jqForm, options) { // formData is an array; here we use $.param to convert it to a string to display it // but the form plugin does this for you automatically when it submits the data var queryString = jQuery.param(formData); // jqForm is a jQuery object encapsulating the form element. To access the // DOM element for the form do this: // var formElement = jqForm[0]; alert('About to submit: \n\n' + queryString); // here we could return false to prevent the form from being submitted; // returning anything other than false will allow the form submit to continue return true; } // post-submit callback function showResponse(responseText, statusText, xhr, $form) { // for normal html responses, the first argument to the success callback // is the XMLHttpRequest object's responseText property // if the ajaxSubmit method was passed an Options Object with the dataType // property set to 'xml' then the first argument to the success callback // is the XMLHttpRequest object's responseXML property // if the ajaxSubmit method was passed an Options Object with the dataType // property set to 'json' then the first argument to the success callback // is the json data object returned by the server alert('status: ' + statusText + '\n\nresponseText: \n' + responseText + '\n\nThe output div should have already been updated with the responseText.'); }

    Read the article

  • Understanding the Customer Form in Release 12 from an AR Perspective!!

    - by user793553
    Confused by the Customer Form in Release 12??  Read on, to get some insight into the evolution of this screen, and how it links in with Trading Community Architecture. Historically, the customer data model was owned by Oracle Receivables (AR).  However, as the data model changed and more complex relationships and attributes had to be tracked and monitored, the Trading Community Architecture (TCA) product was created.  All applications within the E-Business suite that require interaction with a customer integrate with TCA. Customer information is no longer stored in the individual applications but rather in a central repository/registry maintained within TCA.  It is important to understand the following entities/concepts stored in TCA: Party: A party is an entity with whom you can have a potential business relationship.  A party can be either a Person or an Organization.  The Party entity is completely independent of any business relationship; this means that a Party can exist even if you have no transactions with it.   The Party is the "umbrella" entity under which you capture all other attributes listed below. Customer: A customer is a party with whom you have an existing business relationship.  From an AR perspective, you can simplify the concepts by thinking of a Customer as a Party. This definition however does not apply to all other applications. In the Oracle Receivables Customer form, the information displayed at the Customer level is from TCA's Party information record. Customer Account (also called Account): An account contains information about how you transact business with a particular customer.  You can create multiple accounts for a customer.  When you create invoices and receipts you associate it to a particular Account of a Customer. Location: A Location is an address.  It is a point in space, typically identified by a street number, a street name, a city, a state or province, a country.  A location is independent of what it is used for - you do not associate a purpose to a location. Party Site: A Party Site is associated to a Party.  It is the location where a party is physically located.  When defining sites for a Party, only one can be an identifying address.  However, you can define other party sites associated to a party. You can define purposes/usage for Party Sites. Account Site: An Account Site is associated to a Customer Account. It is the location associated to the account you are transacting business with. You can define business purposes (also called site uses) for an Account site. Read more about the Customer Workbench in these notes: Doc ID 1436547.1 Oracle Receivables: Understanding the Customer Form in Release 12 Doc ID  1437866.1 Customer Form - Address: Troubleshooting, Known Issues and Patches Doc ID  1448442.1 Oracle Receivables (AR): Customer Workbench Information Center Do you find this type of blog entry useful?  Please add comments to let us know how we can help you more effectively.  Thank you!
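    Purely as an illustration of how these entities hang together (a hypothetical sketch in Python, not Oracle's actual data model or table definitions): a Party owns Party Sites and Customer Accounts, each Account owns Account Sites with their own business purposes, and both kinds of site point at Locations.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Location:
        address: str                      # a point in space, with no purpose attached

    @dataclass
    class PartySite:
        location: Location
        identifying: bool = False         # only one identifying address per party

    @dataclass
    class AccountSite:
        location: Location
        site_uses: List[str] = field(default_factory=list)   # e.g. bill-to, ship-to purposes

    @dataclass
    class CustomerAccount:
        number: str
        sites: List[AccountSite] = field(default_factory=list)

    @dataclass
    class Party:
        name: str
        party_type: str                   # Person or Organization
        party_sites: List[PartySite] = field(default_factory=list)
        accounts: List[CustomerAccount] = field(default_factory=list)  # a party can have many accounts
    ```

    The Customer form header shows the Party-level information, while transactions such as invoices and receipts are always created against one of the party's Customer Accounts.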

    Read the article

  • Customizing Django form widgets? - Django

    - by RadiantHex
    Hi folks, I'm having a little problem here! I have discovered the following as being the globally accepted method for customizing a Django admin field. from django import forms from django.utils.safestring import mark_safe class AdminImageWidget(forms.FileInput): """ A ImageField Widget for admin that shows a thumbnail. """ def __init__(self, attrs={}): super(AdminImageWidget, self).__init__(attrs) def render(self, name, value, attrs=None): output = [] if value and hasattr(value, "url"): output.append(('<a target="_blank" href="%s">' '<img src="%s" style="height: 28px;" /></a> ' % (value.url, value.url))) output.append(super(AdminImageWidget, self).render(name, value, attrs)) return mark_safe(u''.join(output)) I need to have access to other fields of the model in order to decide how to display the field! For example: suppose I am keeping track of a value, let us call it "sales", and I wish to customize how sales is displayed depending on another field, let us call it "conversion rate". I have no obvious way of accessing the conversion rate field when overriding the sales widget! Any ideas to work around this would be highly appreciated! Thanks :)
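    One common workaround (a hypothetical sketch, not the officially sanctioned method) is to hand the model instance to the widget from the ModelForm's __init__, since self.instance is available there; the widget can then consult any other field when rendering. Assumptions: a Product model with sales and conversion_rate fields, and a recent Django where render() takes a renderer argument.

    ```python
    from django import forms
    from django.utils.safestring import mark_safe

    class SalesWidget(forms.TextInput):
        """Hypothetical widget that renders differently depending on another model field."""
        def __init__(self, instance=None, attrs=None):
            self.instance = instance              # the model object the form is bound to
            super().__init__(attrs)

        def render(self, name, value, attrs=None, renderer=None):
            output = super().render(name, value, attrs, renderer)
            if self.instance is not None and self.instance.pk and self.instance.conversion_rate < 0.1:
                output += ' <span style="color: red;">low conversion rate</span>'
            return mark_safe(output)

    class ProductAdminForm(forms.ModelForm):
        class Meta:
            model = Product                       # assumption: model with sales and conversion_rate fields
            fields = "__all__"

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # self.instance is the object being edited; give the widget access to it
            self.fields["sales"].widget = SalesWidget(instance=self.instance)
    ```

    The admin can then be pointed at this form with form = ProductAdminForm on the ModelAdmin.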

    Read the article

  • php form-input validation

    - by fusion
    I have an HTML page in which I enter data, which is then submitted and inserted into a database on a PHP page. How would I validate in PHP that the data received is not a duplicate of data already in the database? Any help appreciated.
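    The usual pattern is to run a parameterized SELECT for the value before inserting, and to back it up with a UNIQUE constraint on the column so that a race between two requests still cannot create a duplicate. Sketched here in Python with sqlite3 purely to illustrate the logic; in PHP the same idea applies with PDO or mysqli prepared statements.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE entries (email TEXT UNIQUE)")   # UNIQUE guards against duplicates at the DB level

    def insert_if_new(email):
        # 1) Explicit duplicate check with a parameterized query (never interpolate user input).
        exists = conn.execute("SELECT 1 FROM entries WHERE email = ?", (email,)).fetchone()
        if exists:
            return False
        # 2) The UNIQUE constraint is the real safety net if two requests race.
        try:
            conn.execute("INSERT INTO entries (email) VALUES (?)", (email,))
            conn.commit()
            return True
        except sqlite3.IntegrityError:
            return False

    print(insert_if_new("a@example.com"))  # True
    print(insert_if_new("a@example.com"))  # False (duplicate)
    ```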

    Read the article

  • GWT simple form validation example

    - by nablik
    Hi, I'm looking for a nice and fast way of validating forms in GWT that can display errors one by one, focusing on the offending field. I've found gwt-validator and gwt-validation, but their documentation lacks examples. Thanks for the help

    Read the article

  • php, curl, php curl, multipart/form-data, upload picture redirect

    - by Michael
    I'm trying to upload some pictures using PHP cURL on a classified ad website. I think that I set all the parameters properly, but I see that there is a kind of redirect after I post the picture. The issue is that the URL I'm getting redirected to gives a 404 error instead of returning the HTML that it does when I make the post with a normal browser. Here is the PHP code that I have so far: $URL = "http://api.classistatic.com/api/image/upload"; $s = "PAD001"; $v = "2"; $n = "k"; $a = "1:a126581b8150ddc1337cabce28f2feb53849fd143bd6e42649f90175c0e023e3"; $u = "@/var/www/html/artwork/tmp/!BszBLV!EGk~$(KGrHqEOKicEvMi8HVg(BL5ZbWvs0g~~_1.JPG"; $htmlContent = $baseClass->processPicturerequest($URL, $s, $v, $b, $n, $a, $u); The server log is as follows: http://pastebin.com/gZqPgsFX
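    For comparison, here is a hypothetical Python sketch (using the requests library, not a fix for the PHP above) of a multipart/form-data upload where the client keeps its cookies and follows the redirect the server sends after the POST; with PHP cURL the rough equivalents are CURLOPT_FOLLOWLOCATION and a cookie jar, so the redirected request carries the same session. The file path below is shortened for the example.

    ```python
    import requests

    URL = "http://api.classistatic.com/api/image/upload"

    fields = {"s": "PAD001", "v": "2", "n": "k",
              "a": "1:a126581b8150ddc1337cabce28f2feb53849fd143bd6e42649f90175c0e023e3"}

    with open("/var/www/html/artwork/tmp/picture.JPG", "rb") as img:   # placeholder path
        files = {"u": img}                                             # sent as multipart/form-data
        with requests.Session() as session:                            # keeps cookies across the redirect
            response = session.post(URL, data=fields, files=files, allow_redirects=True)

    print(response.status_code)
    print(response.url)        # the final URL after any redirect
    print(response.text[:200])
    ```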

    Read the article

  • Unable to retrieve information from HP-UX pst_status object

    - by bogertron
    I am attempting to get process information by using the HP-UX C/C++ library. After scouring the internet, I have discovered that HP-UX has the pstat.h header file, which allows programmers to retrieve process information. After attempting to understand the example code from the HP website, I tried to create a little test sample to comprehend what the code does. I attempted to use example 3; however, I ran into several issues. The first issue came when I attempted to execute the following line of code: (void)printf("pid is %d, command is %s\n", pst[i].pst_pid, pst[i].pst_ucomm); When I attempted to print the string, I hit a memory fault. So I decided to see what the string is and came up with the following: #include <sys/param.h> #include <sys/pstat.h> #include <sys/unistd.h> #include <string.h> int main(int argc, char** argv) { #define BURST ((size_t)10) struct pst_status pst[BURST]; int i, count; int idx = 0; /* index within the context */ int index = 0; /* loop until count == 0, will occur all have been returned */ while ((count=pstat_getproc(pst, sizeof(pst[0]),BURST,idx))>0) { index = 0; printf("index: %d", index); /* got count (max of BURST) this time. process them */ while (pst[i].pst_ucomm[index] != '\0') { printf("%c", pst[i].pst_ucomm[index]); index++; } printf("\n"); for (i = 0; i < count; i++) { printf("pid is %d, command is \n", pst[i].pst_pid); } /* * now go back and do it again, using the next index after * the current 'burst' */ idx = pst[count-1].pst_idx + 1; } if (count == -1) perror("pstat_getproc()"); #undef BURST } Unfortunately, what happens is that I get the first process printed, then pid is 2, command is pid is 2, command is pid is 2, command is... I know that I must be doing something foolish since my C/C++ skills are not that great, but I cannot figure out what the issue is, since the code is largely copied from the HP website. So here are the questions, for clarity: 1. Why can't printf("%s", pst[i].pst_ucomm); handle strings? 2. Why can't I iterate over the processes in the system? Any help is greatly appreciated.

    Read the article

  • Dealing with "Coder's Block" (or blank form syndrome)

    - by robsoft
    I know this is the sort of somewhat open-ended question that we're discouraged from asking, but there are lots of open-ended questions around already, and this is something quite relevant to me right now. Do you ever get those times when you're about to start work on a new function/feature of an established system, and you get "coder's block"? It's like a mental freeze at the sight of a large, completely unpopulated dialog, or an empty code file with just the stub reference headers etc. Do you ever have that 'ulp' moment that seems to sap all your momentum and leave you wide open to distractions (surfing the web for inspiration, checking out 'crackoverflow' etc)? Not that I'd wish it on anyone, but hopefully some of you do, and hopefully some of you can suggest tips or strategies for overcoming the situation, regaining your momentum and becoming productive again. I usually try to reduce what I'm about to do down to absurdly small steps, in the hope that as the job becomes just a series of 'doh' tasks, I'll kickstart myself into working through them. However sometimes, particularly when a deadline is looming, I'll get overwhelmed by this approach as I realise I probably don't have enough time to do all of those tiny steps properly. Those are the darkest moments, often (literally) just before dawn! This situation can be particularly crippling if you mostly work alone, too. Any thoughts or suggestions? Any methods that you found helpful yourself?

    Read the article

  • Cannot pull correct data from a Javascript array into an HTML form

    - by Isaac
    I am trying to return the description value of the corresponding author name and book title (that are typed in the text boxes). The problem is that the first description displays in the text area no matter what. <h1>Bookland</h1> <div id="bookinfo"> Author name: <input type="text" id="authorname" name="authorname"></input><br /> Book Title: <input type="text" id="booktitle" name="booktitle"></input><br /> <input type="button" value="Find book" id="find"></input> <input type="button" value="Clear Info" id="clear"></input><br /> <textarea rows="15" cols="30" id="destin"></textarea> </div> JavaScript: var bookarray = [{Author: "Thomas Mann", Title: "Death in Venice", Description: "One of the most famous literary works of the twentieth century, this novella embodies" + "themes that preoccupied Thomas Mann in much of his work:" + "the duality of art and life, the presence of death and disintegration in the midst of existence," + "the connection between love and suffering and the conflict between the artist and his inner self." }, {Author: "James Joyce", Title: "A portrait of the artist as a young man", Description: "This work displays an unusually perceptive view of British society in the early 20th century." + "It is a social comedy set in Florence, Italy, and Surrey, England." + "Its heroine, Lucy Honeychurch, struggling against straitlaced Victorian attitudes of arrogance, narroe mindedness and sobbery, falls in love - while on holiday in Italy - with the socially unsuitable George Emerson." }, {Author: "E. M. Forster", Title: "A room with a view", Description: "This book is a fictional re-creation of the Irish writer'sown life and early environment." + "The experiences of the novel's young hero,unfold in astonishingly vivid scenes that seem freshly recalled from life" + "and provide a powerful portrait of the coming of age of a young man ofunusual intelligence, sensitivity and character. " }, {Author: "Isabel Allende", Title: "The house of spirits", Description: "Allende describes the life of three generations of a prominent family in Chile and skillfully combines with this all the main historical events of the time, up until Pinochet's dictatorship." }, {Author: "Isabel Allende", Title: "Of love and shadows", Description: "The whole world of Irene Beltran, a young reporter in Chile at the time of the dictatorship, is destroyed when" + "she discovers a series of killings carried out by government soldiers." + "With the help of a photographer, Francisco Leal, and risking her life, she tries to come up with evidence against the dictatorship." }] function searchbook(){ for(i=0; i < bookarray.length; i++){ if ((document.getElementById("authorname").value & document.getElementById("booktitle").value ) == (bookarray[i].Author & bookarray[i].Title)){ document.getElementById("destin").value =bookarray[i].Description return bookarray[i].Description } else { return "Not Found!" } } } document.getElementById("find").addEventListener("click", searchbook, false)

    Read the article
