Search Results

Search found 17480 results on 700 pages for 'dynamic variable'.


  • Inspecting the model in a Rails application

    - by Matt Sherman
    I am learning some Ruby on Rails, and am a newbie. Most of my background is in ASP.NET MVC on the back end. As I play with a basic scaffold project, I wonder about this case: you jump into an established Rails project and want to get to know the model. Based on what I have seen so far (again, a simple scaffold), the properties for a given class are not immediately revealed. I don't see property accessors on the model classes. I do understand that this is because of the dynamic nature of Ruby, and that such things are not necessary or even desirable. Convention over configuration, I get that. (I am familiar with dynamic concepts, mostly via JS.) But if I am somewhere off in a view and want to quickly know whether the (e.g.) Person object has a MiddleName property, how would I find that out? I don't have to go into the migrations, do I?
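
    As a hedged illustration (not part of the original post): in a conventional ActiveRecord setup the schema can usually be inspected without opening the migrations at all. Assuming a Person model backed by a "people" table, for instance:

        # In rails console (column names here are illustrative only):
        Person.column_names                  # => ["id", "first_name", "middle_name", ...]
        Person.columns_hash["middle_name"]   # column metadata (type, default, null), or nil if absent
        # db/schema.rb also lists every table and column in one generated file.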

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction
    This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article; you can read more about the topic of custom subject areas in the documentation book Extending Sales. There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally the article reviews how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers.

    Custom Fields on Standard Objects
    If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two-minute video demonstrates this.

    Custom Subject Areas for Standard Objects
    You can create your own subject areas for use in analytics and reporting via Application Composer. An example use case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as this is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forwards. The subject area creation process is essentially two-fold: once the request is submitted the ADF artifacts are generated, and then the related metadata is sent to the BI presentation server APIs to make the updates there. Note that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use. Within the creation process's wizard-like steps there are three concepts worth highlighting:

    - Date Flattening - this feature permits the roll-up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field.
    - Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for numeric fields.
    - Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency.

    This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.

    Custom Objects
    Custom subject areas support three types of custom objects. First is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object - usually through a dynamic choice list. Whilst the steps for these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships.

    Flexfields
    Dynamic and Extensible Flexfields satisfy a similar requirement to custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, in the edit page under each segment region (for both global and context segments) there is a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    ^^^ If I can remedy the above messages I think I'll be good to go ^^^

    * EDIT 2 * Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite the documentation stating the files are created automatically, and despite what I have seen before... dunno why, but that did the trick. Also re-checked perms on the dir the files live in. As certain as I was, they were correct, with named having rw.

    CentOS 6 (final), dhcpd 4.1.1-P1, named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6

    Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. chroot: /var/named/chroot. I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic - no matter which dir, with named owning and wide-open perms, I now get nowhere). Along the way I, at one point, got a permission denied when named tried to create the journal. Resolved the issue by:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... didn't matter since this is a new setup, so I deleted it and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled...

        [root@server temp]# sestatus
        SELinux status: disabled

    dhcpd.conf...

        allow client-updates;
        ddns-update-style interim;
        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf...

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };
        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows /var/log/named.log output of named starting up - no errors.
        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start...

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot here and, if so, how? Many thanks.
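
    Not part of the original question, but a hedged troubleshooting sketch: the dynamic-update path can be exercised independently of dhcpd with nsupdate, signing the request with the same TSIG key shown in the configs above. If the update succeeds and the journal appears, the problem is on the dhcpd side; if named rejects it, the reason should show up in its logs.

        # test.example.lan and 192.168.111.99 are placeholder values for the test record.
        nsupdate -y 'hmac-md5:dhcpudpate:LDJMdPdEZED+/nN/AGO9ZA==' <<'EOF'
        server 192.168.111.2
        zone example.lan
        update add test.example.lan. 300 A 192.168.111.99
        send
        EOF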

    Read the article

  • Develop an ASP.NET Website using WebMatrix

    The following article explains how to install and develop a website using WebMatrix and add ASP.NET web pages to the website. One of the positive features of websites developed with WebMatrix is that the ASP.NET Helper Library and Razor Syntax can be used to provide enhanced features and dynamic content to the site. Razor Syntax is a simple and effective programming language that works well on the WebMatrix platform. As a result, a brief introduction to ASP.NET helper dynamic content and Razor Syntax is provided at the end of this article along with resources to assist in web development using Razor Syntax.
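
    As a brief, hedged illustration of the Razor syntax the article refers to (not taken from the article itself), a WebMatrix .cshtml page mixes C# code blocks and markup, with @ switching into code:

        @{
            // Razor code block: plain C# executed when the page renders
            var greeting = "Hello from WebMatrix";
            var now = DateTime.Now;
        }
        <!DOCTYPE html>
        <html>
            <body>
                <p>@greeting - the server time is @now.ToString("HH:mm").</p>
            </body>
        </html>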

    Read the article

  • Can coding style cause or influence memory fragmentation?

    - by Robert Dailey
    As the title states, I'd like to know if coding style can cause or influence memory fragmentation in a native application, specifically one written using C++. If it does, I'd like to know how. An example of what I mean by coding style is using std::string to represent strings (even static strings) and perform operations on them instead of using the C Library (such as strcmp, strlen, and so on) which can work both on dynamic strings and static strings (the latter point is beneficial since it does not require an additional allocation to access string functions, which is not the case with std::string). A "forward-looking" attitude I have with C++ is to not use the CRT, since to do so would, in a way, be a step backwards. However, such a style results in more dynamic allocations, and especially for a long living application like a server, this causes some speculation that memory fragmentation might become a problem.
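
    To make the allocation difference concrete, here is a small hedged sketch (not from the original post): building a std::string from a long literal just to compare it forces a heap allocation once the text exceeds the small-string buffer, while strcmp compares the bytes in place with no allocation at all. Repeating the former pattern in a long-lived server is exactly the kind of allocator churn the question worries about.

        #include <cstring>
        #include <string>

        // Comparison via an explicit std::string temporary: the literal is copied into
        // a freshly allocated buffer (for strings longer than the SSO capacity) and
        // freed immediately afterwards - allocator traffic on every call.
        bool is_admin_string(const std::string& role) {
            return role == std::string("administrator-group-with-a-long-name");
        }

        // Comparison via the C library: no allocation, the literal lives in static storage.
        bool is_admin_cstr(const char* role) {
            return std::strcmp(role, "administrator-group-with-a-long-name") == 0;
        }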

    Read the article

  • Infinite terrain shadows

    - by user35399
    I'm creating an infinite terrain engine, which generates the terrain either with fractals or noise. How can I make dynamic shadows for the sun on this terrain, if I don't know in advance what will be rendered in front of the sun? My terrain: the sun is the only light and it is directional; the terrain is generated on a plane which is positioned in front of the camera, frustum culled, and sized to fit the viewing frustum. It is height-mapped with a generated noise texture and uses tessellation shaders. Video: http://www.youtube.com/watch?v=tk6yFwYusOs - dynamic shadows with the infinite terrain.
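
    One widely used approach for a single directional light over unbounded terrain (a hedged suggestion, not the poster's own technique) is cascaded shadow maps: depth is rendered from the sun only for slices of the current view frustum, so nothing outside the view ever needs to be known in advance. As a sketch, the "practical" split scheme that places the cascade boundaries looks like this:

        #include <cmath>
        #include <vector>

        // Blend of logarithmic and uniform splits; returns the far distance of each cascade.
        // A light-space orthographic box is then fitted around each frustum slice.
        std::vector<float> cascadeSplits(float nearZ, float farZ, int cascades, float lambda = 0.75f) {
            std::vector<float> splits(cascades);
            for (int i = 1; i <= cascades; ++i) {
                float p = static_cast<float>(i) / cascades;
                float logSplit = nearZ * std::pow(farZ / nearZ, p);   // logarithmic term
                float uniSplit = nearZ + (farZ - nearZ) * p;          // uniform term
                splits[i - 1] = lambda * logSplit + (1.0f - lambda) * uniSplit;
            }
            return splits;
        }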

    Read the article

  • What options should I consider for a modern Web/Mobile development stack? [on hold]

    - by jimmy_terra
    I'm a long-time server-side dev who has been tasked with building a bleeding-edge web UI (go figure), so apologies for the very broad nature of the question. What are the best modern libraries, tools, languages and patterns for building a dynamic web application that will also run seamlessly on mobiles? My requirements are that it must be dynamic (push updates), support automated testing, and allow 'componentization' (a team of devs will be working on this). What should I check out and why? I will start off with some of the things I'm looking at already:

    - Front-end: HTML5, CSS3, JavaScript, AngularJS
    - Testing: Karma, Testem, Jasmine
    - Patterns: Single Page Applications
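
    For the automated-testing requirement, a hedged sketch of what a Jasmine spec looks like (the function under test is defined inline purely so the example is self-contained); Karma's role is simply to run specs like this in real browsers:

        // spec/greeting.spec.js
        function greet(name) {
          if (!name) { throw new Error('name required'); }
          return 'Hello, ' + name + '!';
        }

        describe('greet', function () {
          it('greets by name', function () {
            expect(greet('Ada')).toBe('Hello, Ada!');
          });

          it('throws when the name is missing', function () {
            expect(function () { greet(''); }).toThrow();
          });
        });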

    Read the article

  • Inserting Data into a Microsoft SQL 2008 Database in ASP.NET 3.5

    In the previous article, Creating an ASP.NET Dynamic Web Page using a MS SQL Server 2008 Database GridView Display, you learned how to create a dynamic web page that lets the user edit and delete database records directly using a web browser. It was demonstrated with a home renovation project where team leaders can update and delete project tasks online. However, it did not include features that let users add or insert new records directly into the database using a web browser. That feature will be covered in this tutorial....
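
    The excerpt only describes the tutorial; as a hedged sketch of the underlying ADO.NET pattern (the table and column names are invented, and the connection string would normally come from web.config), inserting a record with a parameterized command looks like this:

        using System.Data.SqlClient;

        public static class TaskRepository
        {
            public static void InsertTask(string connectionString, string taskName, string owner)
            {
                // Parameterized INSERT: values travel as parameters, not concatenated into the SQL text.
                const string sql = "INSERT INTO ProjectTasks (TaskName, Owner) VALUES (@TaskName, @Owner)";
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    command.Parameters.AddWithValue("@TaskName", taskName);
                    command.Parameters.AddWithValue("@Owner", owner);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }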

    Read the article

  • Java java.util.ConcurrentModificationException error

    - by vijay
    Hi all, please can anybody help me solve this problem? For the last several days I have not been able to resolve this error. I tried using a synchronized method and other approaches, but nothing worked, so please help me.

    Error:

        java.util.ConcurrentModificationException
            at java.util.AbstractList$Itr.checkForComodification(Unknown Source)
            at java.util.AbstractList$Itr.remove(Unknown Source)
            at JCA.startAnalysis(JCA.java:103)
            at PrgMain2.doPost(PrgMain2.java:235)

    Code:

        public synchronized void startAnalysis() {
            //set Starting centroid positions - Start of Step 1
            setInitialCentroids();
            Iterator<DataPoint> n = mDataPoints.iterator();
            //assign DataPoint to clusters
            loop1:
            while (true) {
                for (Cluster c : clusters) {
                    c.addDataPoint(n.next());
                    if (!n.hasNext())
                        break loop1;
                }
            }
            //calculate E for all the clusters
            calcSWCSS();
            //recalculate Cluster centroids - Start of Step 2
            for (Cluster c : clusters) {
                c.getCentroid().calcCentroid();
            }
            //recalculate E for all the clusters
            calcSWCSS();
            // List copy = new ArrayList(originalList);
            //synchronized (c) {
            for (int i = 0; i < miter; i++) {
                //enter the loop for cluster 1
                for (Cluster c : clusters) {
                    for (Iterator<DataPoint> k = c.getDataPoints().iterator(); k.hasNext(); ) {
                        // synchronized (k) {
                        DataPoint dp = k.next();
                        System.out.println("Value of DP" + dp);
                        //pick the first element of the first cluster
                        //get the current Euclidean distance
                        double tempEuDt = dp.getCurrentEuDt();
                        Cluster tempCluster = null;
                        boolean matchFoundFlag = false;
                        //call testEuclidean distance for all clusters
                        for (Cluster d : clusters) {
                            //if testEuclidean < currentEuclidean then
                            if (tempEuDt > dp.testEuclideanDistance(d.getCentroid())) {
                                tempEuDt = dp.testEuclideanDistance(d.getCentroid());
                                tempCluster = d;
                                matchFoundFlag = true;
                            } //if statement - Check whether the Last EuDt is > Present EuDt
                        } //for variable 'd' - Looping between different Clusters for matching a Data Point.
                        //add DataPoint to the cluster and calcSWCSS
                        if (matchFoundFlag) {
                            tempCluster.addDataPoint(dp);
                            //k.notify();
                            // if(k.hasNext())
                            k.remove();
                            for (Cluster d : clusters) {
                                d.getCentroid().calcCentroid();
                            } //for variable 'd' - Recalculating centroids for all Clusters
                            calcSWCSS();
                        } //if statement - A Data Point is eligible for transfer between Clusters.
                        // }// syn
                    } //for variable 'k' - Looping through all Data Points of the current Cluster.
                } //for variable 'c' - Looping through all the Clusters.
            } //for variable 'i' - Number of iterations.
            // syn
        }
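
    As a general, hedged sketch (not an answer from the original thread): java.util.ConcurrentModificationException means the list behind an iterator was structurally modified outside that iterator while it was still in use, which is what trips checkForComodification. A common remedy is to mutate the iterated list only through the iterator itself and defer any other additions or recalculations until iteration has finished, for example:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Iterator;
        import java.util.List;

        public class DeferredMoveExample {
            public static void main(String[] args) {
                // Stand-ins for the question's per-cluster DataPoint list.
                List<String> current = new ArrayList<>(Arrays.asList("a", "bb", "ccc"));
                List<String> moved = new ArrayList<>();

                // Phase 1: decide while iterating; the only structural change to
                // 'current' goes through the iterator itself (it.remove()).
                for (Iterator<String> it = current.iterator(); it.hasNext(); ) {
                    String p = it.next();
                    if (p.length() > 1) {   // stand-in for "a closer cluster was found"
                        it.remove();        // safe removal via the iterator
                        moved.add(p);       // remember the move, apply it later
                    }
                }

                // Phase 2: apply the deferred additions (and any recalculation) only
                // after iteration is over, so no live iterator can be invalidated.
                System.out.println("kept: " + current + ", moved: " + moved);
            }
        }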

    Read the article

  • Event Handlers and Automatic Postback in ASP.NET 3.5 Web Controls

    In one of last week's tutorials, Creating Database-Driven ASP.NET 3.5 Input and List Web Controls, you learned how to create a dynamic input web control that, instead of having its values set statically, sourced its list and values directly from the MS SQL Server 2008 database. This tutorial is a sequel to that article. It deals mostly with the server-side coding aspect of dynamic web controls. It is recommended that you read the earlier tutorial first, as the Visual Web Developer project in that tutorial will be used extensively in this article....
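
    As a hedged illustration of the article's subject (not code from the article, and the control and field names are invented): a list control declared with AutoPostBack="true" posts the page back as soon as the selection changes, and the matching server-side event handler then runs.

        using System;
        using System.Web.UI;

        // Hypothetical code-behind for a page whose markup declares:
        //   <asp:DropDownList ID="ddlTasks" runat="server" AutoPostBack="true"
        //        OnSelectedIndexChanged="ddlTasks_SelectedIndexChanged" />
        //   <asp:Label ID="lblChoice" runat="server" />
        public partial class TaskPage : Page
        {
            protected void ddlTasks_SelectedIndexChanged(object sender, EventArgs e)
            {
                // Runs on the server after the automatic postback caused by changing the selection.
                lblChoice.Text = "You picked: " + ddlTasks.SelectedValue;
            }
        }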

    Read the article

  • Why is C# not statically typed, but F# and Haskell are?

    - by ??????? ???????
    There was a talk given by Brian Hurt about the advantages and disadvantages of static typing. Brian said that by static typing he doesn't mean C#, but F# and Haskell. Is it because of the dynamic keyword added in C# 4.0? But that feature is relatively rarely used. By the way, there are ? and unsafeCoerce in Haskell, which obviously are not the same thing, but which can blow up at runtime much like the exceptions thrown as a result of dynamic. Finally, why can F# and Haskell be called statically typed languages while C# can't?
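
    For reference (a hedged sketch, not from the question itself): dynamic tells the C# compiler to defer member resolution to runtime, so code that a static check would reject compiles cleanly and only fails when executed.

        using System;

        class DynamicDemo
        {
            static void Main()
            {
                dynamic value = "hello";
                Console.WriteLine(value.Length);       // resolved at runtime: prints 5

                // Compiles fine, but throws Microsoft.CSharp.RuntimeBinder.RuntimeBinderException
                // at runtime because string has no such method.
                Console.WriteLine(value.NoSuchMethod());
            }
        }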

    Read the article

  • Apress Deal of the day - 5/Feb/2011

    - by TATWORTH
    Today's $10 Deal of the Day from Apress at http://www.apress.com/info/dailydeal is: Pro ASP.NET 4 in C# 2010, Fourth Edition ASP.NET 4 is the latest version of Microsoft's revolutionary ASP.NET technology. It is the principal standard for creating dynamic web pages on the Windows platform. Pro ASP.NET 4 in C# 2010 raises the bar for high-quality, practical advice on learning and deploying Microsoft's dynamic web solution. $59.99 | Published Jun 2010 | Matthew MacDonald I am reviewing this book at the moment but I was already sufficiently impressed by this book to have bought the PDF the day it was available last December.

    Read the article

  • PHP Changing Class Variables Outside of Class

    - by Jamie Bicknell
    Apologies for the wording of this question; I'm having difficulty explaining what I'm after, but hopefully it makes sense. Let's say I have a class, and I wish to pass a variable through one of its methods; then I have another method which outputs this variable. That's all fine, but what I'm after is that if I update the variable which was originally passed, and do this outside the class methods, it should be reflected in the class. I've created a very basic example:

        class Test {
            private $var = '';

            function setVar($input) {
                $this->var = $input;
            }

            function getVar() {
                echo 'Var = ' . $this->var . '<br />';
            }
        }

    If I run

        $test = new Test();
        $string = 'Howdy';
        $test->setVar($string);
        $test->getVar();

    I get "Var = Howdy". However, this is the flow I would like:

        $test = new Test();
        $test->setVar($string);
        $string = 'Hello';
        $test->getVar();
        $string = 'Goodbye';
        $test->getVar();

    with the expected output being

        Var = Hello
        Var = Goodbye

    I don't know what the correct name for this would be, and I've tried using references to the original variable but had no luck. I've come across this in the past with PDO prepared statements, see Example #2:

        $stmt = $dbh->prepare("INSERT INTO REGISTRY (name, value) VALUES (?, ?)");
        $stmt->bindParam(1, $name);
        $stmt->bindParam(2, $value);

        // insert one row
        $name = 'one';
        $value = 1;
        $stmt->execute();

        // insert another row with different values
        $name = 'two';
        $value = 2;
        $stmt->execute();

    I know I can change the variable to public and do the following, but it isn't quite the same as how the PDO class handles it, and I'm really looking to mimic that behaviour.

        $test = new Test();
        $test->setVar($string);
        $test->var = 'Hello';
        $test->getVar();
        $test->var = 'Goodbye';
        $test->getVar();

    Any help, ideas, pointers, or advice would be greatly appreciated, thanks.
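
    The excerpt stops at the question; as a hedged sketch of one way PHP can reproduce the bindParam-style behaviour (not from the original thread), the method can accept and store the argument by reference, so later assignments to the outer variable remain visible through the object:

        <?php
        class Test
        {
            private $var = '';

            // Accept the argument by reference and keep that reference in the property.
            public function setVar(&$input)
            {
                $this->var = &$input;
            }

            public function getVar()
            {
                echo 'Var = ' . $this->var . "\n";
            }
        }

        $test = new Test();
        $string = 'Howdy';
        $test->setVar($string);

        $string = 'Hello';
        $test->getVar();   // Var = Hello

        $string = 'Goodbye';
        $test->getVar();   // Var = Goodbye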

    Read the article

  • Cannot boot Windows 7 after installing Ubuntu 13.04

    - by whowantsakookie
    So I boot up my computer after installing Ubuntu 13.04. Grub correctly shows me all available boot options and I am able to boot to Ubuntu. However, when I try to boot into Windows 7, grub hangs at a purple screen. I have an HP laptop. It came with all four primary partitions taken up by the Windows bootloader, the actual Windows partition, one called HP_TOOLS, and another for HP Restore. I was able to back up and delete HP_TOOLS and the recovery partition, and change my disk type from Dynamic to Basic (GParted doesn't recognize Dynamic drives). I then booted into a live session of Ubuntu and made two partitions with GParted: one large partition for storage space that I could use between the two operating systems (sda4), and another extended partition (sda3) which contained Ubuntu (sda6) and its swap space (sda5). It currently looks like this: I'm not sure if the second paragraph is actually relevant; I just want you to know all the variables in the equation. Thank you in advance for helping this poor noob.

    Read the article

  • HTML5, PHP, JAVA or asp?

    - by user67418
    I am building a new website for a friend of mine. It's all plain HTML and a server-side include. The problem is that static pages for 500 products would not be fun to create or maintain. So I am forced to at least put dynamic information on these pages based off a spreadsheet, or to make the pages dynamic altogether. What I want to do is have a spreadsheet that can be used to keep track of in-stock quantity, SKU numbers, etc., so that I don't have to update hundreds of pages every night. He can just edit the spreadsheet and the pages will automatically adjust. I am a busy man, and I am not asking anyone to just give me the answer. But to save some time, what is most worth learning to get this done fastest: HTML5, PHP, Java, ASP, or is there something else better suited?
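
    As a hedged sketch of the spreadsheet-driven idea (not an answer from the original thread; the file name and column layout are invented): if the spreadsheet is exported as CSV, a PHP page can read it at request time and render the current stock, so only the CSV ever needs editing.

        <?php
        // Hypothetical products.csv layout: sku,name,price,in_stock
        $handle = fopen(__DIR__ . '/products.csv', 'r');
        if ($handle === false) {
            die('Could not open products.csv');
        }
        fgetcsv($handle); // skip the header row
        while (($row = fgetcsv($handle)) !== false) {
            list($sku, $name, $price, $inStock) = $row;
            printf(
                "<div class=\"product\">%s (SKU %s) - $%s, %d in stock</div>\n",
                htmlspecialchars($name),
                htmlspecialchars($sku),
                htmlspecialchars($price),
                (int) $inStock
            );
        }
        fclose($handle);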

    Read the article

  • How to automate org-refile for multiple todo

    - by lawlist
    I'm looking to automate org-refile so that it will find all of the matches and re-file them to a specific location (but not archive). I found a fully automated method of archiving multiple todo, and I am hopeful to find or create (with some help) something similar to this awesome function (but for a different heading / location other than archiving): https://github.com/tonyday567/jwiegley-dot-emacs/blob/master/dot-org.el (defun org-archive-done-tasks () (interactive) (save-excursion (goto-char (point-min)) (while (re-search-forward "\* \\(None\\|Someday\\) " nil t) (if (save-restriction (save-excursion (org-narrow-to-subtree) (search-forward ":LOGBOOK:" nil t))) (forward-line) (org-archive-subtree) (goto-char (line-beginning-position)))))) I also found this (written by aculich), which is a step in the right direction, but still requires repeating the function manually: http://stackoverflow.com/questions/7509463/how-to-move-a-subtree-to-another-subtree-in-org-mode-emacs ;; I also wanted a way for org-refile to refile easily to a subtree, so I wrote some code and generalized it so that it will set an arbitrary immediate target anywhere (not just in the same file). ;; Basic usage is to move somewhere in Tree B and type C-c C-x C-m to mark the target for refiling, then move to the entry in Tree A that you want to refile and type C-c C-w which will immediately refile into the target location you set in Tree B without prompting you, unless you called org-refile-immediate-target with a prefix arg C-u C-c C-x C-m. ;; Note that if you press C-c C-w in rapid succession to refile multiple entries it will preserve the order of your entries even if org-reverse-note-order is set to t, but you can turn it off to respect the setting of org-reverse-note-order with a double prefix arg C-u C-u C-c C-x C-m. (defvar org-refile-immediate nil "Refile immediately using `org-refile-immediate-target' instead of prompting.") (make-local-variable 'org-refile-immediate) (defvar org-refile-immediate-preserve-order t "If last command was also `org-refile' then preserve ordering.") (make-local-variable 'org-refile-immediate-preserve-order) (defvar org-refile-immediate-target nil) "Value uses the same format as an item in `org-refile-targets'." (make-local-variable 'org-refile-immediate-target) (defadvice org-refile (around org-immediate activate) (if (not org-refile-immediate) ad-do-it ;; if last command was `org-refile' then preserve ordering (let ((org-reverse-note-order (if (and org-refile-immediate-preserve-order (eq last-command 'org-refile)) nil org-reverse-note-order))) (ad-set-arg 2 (assoc org-refile-immediate-target (org-refile-get-targets))) (prog1 ad-do-it (setq this-command 'org-refile))))) (defadvice org-refile-cache-clear (after org-refile-history-clear activate) (setq org-refile-targets (default-value 'org-refile-targets)) (setq org-refile-immediate nil) (setq org-refile-immediate-target nil) (setq org-refile-history nil)) ;;;###autoload (defun org-refile-immediate-target (&optional arg) "Set current entry as `org-refile' target. Non-nil turns off `org-refile-immediate', otherwise `org-refile' will immediately refile without prompting for target using most recent entry in `org-refile-targets' that matches `org-refile-immediate-target' as the default." 
(interactive "P") (if (equal arg '(16)) (progn (setq org-refile-immediate-preserve-order (not org-refile-immediate-preserve-order)) (message "Order preserving is turned: %s" (if org-refile-immediate-preserve-order "on" "off"))) (setq org-refile-immediate (unless arg t)) (make-local-variable 'org-refile-targets) (let* ((components (org-heading-components)) (level (first components)) (heading (nth 4 components)) (string (substring-no-properties heading))) (add-to-list 'org-refile-targets (append (list (buffer-file-name)) (cons :regexp (format "^%s %s$" (make-string level ?*) string)))) (setq org-refile-immediate-target heading)))) (define-key org-mode-map "\C-c\C-x\C-m" 'org-refile-immediate-target) It sure would be helpful if aculich, or some other maven, could please create a variable similar to (setq org-archive-location "~/0.todo.org::* Archived Tasks") so users can specify the file and heading, which is already a part of the org-archive-subtree functionality. I'm doing a search and mark because I don't have the wherewithal to create something like org-archive-location for this setup. EDIT: One step closer -- almost home free . . . (defun lawlist-auto-refile () (interactive) (beginning-of-buffer) (re-search-forward "\* UNDATED") (org-refile-immediate-target) ;; cursor must be on a heading to work. (save-excursion (re-search-backward "\* UNDATED") ;; must be written in such a way so that sub-entries of * UNDATED are not searched; or else infinity loop. (while (re-search-backward "\* \\(None\\|Someday\\) " nil t) (org-refile) ) ) )

    Read the article

  • How do I allow mysqld to use more than 24.9% of my cpu?

    - by Joseph Yancey
    I have a Web server running on RHEL that is running Apache and MySQL. It has a Quad core 3.2Ghz Xeon CPU and 8 Gigs of RAM Most of the time, we don't have any issues at all. Our web application is very database intensive. When our usage gets pretty heavy MySQL will peg out at using 24.9% of the cpu. Most of the time, it hangs around below 5%. I have speculated that it is only using one core of the CPU and it is pegging out that core but TOP shows me in the cpu column that mysqld changes cores even while the usage stays at 24.9%. When it does this MySQL gets painfully slow as it is queuing up queries Is there some magic configuration that will tell mysql to use more cpu when it needs to? Also, any other advice on my configuration would be helpful. We run two applications on this server. One that runs Innodb but doesn't get much usage (it has been replaced by the other app), and one that runs MyIsam and gets lots of use. Overall, our whole mysql data directory is something like 13Gigs if that matters at all. Here is my config: [root@ProductionLinux root]# cat /etc/my.cnf [mysqld] server-id = 71 log-bin = /var/log/mysql/mysql-bin.log binlog-do-db = oldapplication binlog-do-db = newapplication binlog-do-db = support thread_cache_size = 30 key_buffer_size = 256M table_cache = 256 sort_buffer_size = 4M read_buffer_size = 1M skip-name-resolve innodb_data_home_dir = /usr/local/mysql/data/ innodb_data_file_path = InnoDB:100M:autoextend set-variable = innodb_buffer_pool_size=70M set-variable = innodb_additional_mem_pool_size=10M set-variable = max_connections=500 innodb_log_group_home_dir = /usr/local/mysql/data innodb_log_arch_dir = /usr/local/mysql/data set-variable = innodb_log_file_size=20M set-variable = innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit = 1 log-queries-not-using-indexes log-error = /var/log/mysql/mysql-error.log mysql show variables; +---------------------------------+-----------------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+-----------------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 5 | | datadir | /usr/local/mysql/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | 
| have_csv | NO | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_ndbcluster | NO | | have_openssl | NO | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | init_connect | | | init_file | | | init_slave | | | innodb_additional_mem_pool_size | 10485760 | | innodb_autoextend_increment | 8 | | innodb_buffer_pool_awe_mem_mb | 0 | | innodb_buffer_pool_size | 73400320 | | innodb_checksums | ON | | innodb_commit_concurrency | 0 | | innodb_concurrency_tickets | 500 | | innodb_data_file_path | InnoDB:100M:autoextend | | innodb_data_home_dir | /usr/local/mysql/data/ | | innodb_doublewrite | ON | | innodb_fast_shutdown | 1 | | innodb_file_io_threads | 4 | | innodb_file_per_table | OFF | | innodb_flush_log_at_trx_commit | 1 | | innodb_flush_method | | | innodb_force_recovery | 0 | | innodb_lock_wait_timeout | 50 | | innodb_locks_unsafe_for_binlog | OFF | | innodb_log_arch_dir | /usr/local/mysql/data | | innodb_log_archive | OFF | | innodb_log_buffer_size | 8388608 | | innodb_log_file_size | 20971520 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | /usr/local/mysql/data | | innodb_max_dirty_pages_pct | 90 | | innodb_max_purge_lag | 0 | | innodb_mirrored_log_groups | 1 | | innodb_open_files | 300 | | innodb_support_xa | ON | | innodb_sync_spin_loops | 20 | | innodb_table_locks | ON | | innodb_thread_concurrency | 20 | | innodb_thread_sleep_delay | 10000 | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 268435456 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | license | GPL | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | /var/log/mysql/mysql-error.log | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | 1 | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 1048576 | | max_binlog_cache_size | 18446744073709551615 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 500 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 18446744073709551615 | | max_length_for_sort_data | 1024 | | max_relay_log_size | 0 | | max_seeks_for_key | 18446744073709551615 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 18446744073709551615 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 9223372036854775807 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 8388608 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2510 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /usr/local/mysql/data/ProductionLinux.pid | | port | 3306 | | preload_buffer_size | 32768 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | 
query_cache_min_res_unit | 4096 | | query_cache_size | 0 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 2048 | | read_buffer_size | 1044480 | | read_only | OFF | | read_rnd_buffer_size | 262144 | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | server_id | 71 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /tmp/mysql.sock | | sort_buffer_size | 4194296 | | sql_mode | | | sql_notes | ON | | sql_warnings | ON | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | sync_replication | 0 | | sync_replication_slave_id | 0 | | sync_replication_timeout | 10 | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 30 | | thread_stack | 262144 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.18-standard-log | | version_comment | MySQL Community Edition - Standard (GPL) | | version_compile_machine | x86_64 | | version_compile_os | unknown-linux-gnu | | wait_timeout | 28800 | +---------------------------------+-----------------------------------------------------------------------------+ 210 rows in set (0.00 sec)
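
    Not part of the original question, but a hedged diagnostic sketch: in MySQL of this vintage a single query runs on a single thread, so one busy connection tops out at roughly one core (about 25% on a quad-core box). These standard statements can show whether one long-running query, or MyISAM table locks held by the busy application, is what everything else is queuing behind.

        -- What is each connection doing right now, and for how long?
        SHOW FULL PROCESSLIST;

        -- How often do sessions have to wait for MyISAM table locks?
        SHOW GLOBAL STATUS LIKE 'Table_locks%';

        -- Is the slow-query log enabled? (long_query_time is 10s in the config above)
        SHOW VARIABLES LIKE 'log_slow_queries';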

    Read the article

  • Screen Scraping Twitter

    - by BRADINO
    I got an email today asking for help to scrape Twitter - in particular, to be able to log in. So I am going to show everyone, NOT to encourage anyone to violate Twitter's terms of use, but as an educational blog post about how PHP and cURL can be used to post variables and store cookies. Again, I am using the cScrape class I wrote, which you can download.

    Step 1. First go to twitter.com and look at the source code of the login form to get the form field names and the form post location. You will see that the form posts to https://twitter.com/session and the username and password fields are session[username_or_email] and session[password] respectively.

    Step 2. Now you are ready to log in. Using the fetch function in the Scrape class, you create an associative array to contain the form values you want to post. The other thing you will need to do is uncomment the lines for CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR. Cookies are required to stay logged in and scrape around, and the paths to the cookie files need to be writable by your app. Also you will need to uncomment the line about CURLOPT_FOLLOWLOCATION.

        $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret");
        $scrape->fetch('https://twitter.com/sessions', $data);

    Step 1.5. Oops, that didn't work. All I got back was "403 Forbidden: The server understood the request, but is refusing to fulfill it." Ahhh, I see another variable called authenticity_token - I bet Twitter was looking for that. So let's back up, first hit twitter.com to get the authenticity_token variable, and then make the login post request with that variable included in our array of parameters.

        $scrape->fetch('https://twitter.com');
        $data = array('session[username_or_email]' => "bradino", 'session[password]' => "secret");
        $data['authenticity_token'] = $scrape->fetchBetween('name="authenticity_token" type="hidden" value="', '"', $scrape->result);
        $scrape->fetch('https://twitter.com/sessions', $data);
        echo $scrape->result;

    So that's basically it. Now you are logged in and can scrape around and request other pages as you normally would. Sorry it wasn't a longer post. I really do enjoy this kind of stuff, so if anyone has a request, hit me up.

    Errors?
    1) Make sure that you are properly parsing the token variable.
    2) Make sure that you uncommented the lines about CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR; those options need to be enabled, and be sure the path set is writable by your application.
    3) Make sure that the path to the cookie file is writable and that it is getting data written to it.
    4) If you get a message about being redirected, you need to uncomment the line about CURLOPT_FOLLOWLOCATION; that option needs to be set to true.

    Read the article

  • Tips & Tricks: How to crawl an SSL-enabled Oracle E-Business Suite

    - by Rajesh Ghosh
    Oracle E-Business Suite can be integrated with Oracle Secure Enterprise Search for a superior end-user experience and enhanced data retrieval capabilities. Before end users can perform search operations, data has to be crawled and indexed into the Oracle SES server. However, if the Oracle E-Business Suite instance is on SSL, some additional configuration is needed in the Oracle SES server as well as in Oracle Search Modeler before a search object can be deployed and crawled. The process involves the following steps.

    Step 1: Export the SSL certificate of Oracle E-Business Suite
    Access the Oracle E-Business Suite instance from a web browser. You should be able to locate a security or certificate icon somewhere in the browser toolbar or status bar, depending on which browser you are using. Click on it and you should be able to view the certificate as well as export it to a local file. While exporting, make sure that you use the "DER encoded" format.

    Step 2: Import the SSL certificate into the Oracle Secure Enterprise Search server's Java key store
    Oracle SES (10.1.8.4) by default ships a JDK under $ORACLE_HOME. The Oracle SES mid-tier uses this JDK to start the OC4J container services. In this step the Oracle E-Business Suite SSL certificate exported in step 1 has to be imported into the Oracle SES server's Java key store. Perform the following:
    1. Copy the certificate file onto the server where the Oracle SES server is running, under $ORACLE_HOME/jdk/jre/lib/security/cacerts. "ORACLE_HOME" points to the Oracle SES Oracle home.
    2. Set the JAVA_HOME environment variable to $ORACLE_HOME/jdk.
    3. Append $JAVA_HOME/bin to the PATH environment variable.
    4. Issue the command "keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt". Please substitute "ebs.crt" with the name of the certificate file you copied in step 2.1. The default key-store password is "changeit"; enter it when prompted. If successful, this process will end with a message saying "certificate successfully imported".

    Step 3: Import the SSL certificate into the Search Modeler Java key store
    Unlike Oracle SES, Search Modeler is not shipped with a bundled JDK. If you are using standalone OC4J, then you actually use an external JDK to start the OC4J container services. If you are using an IAS instance, then the JDK comes bundled with the IAS installation. Perform the following:
    1. Copy the certificate file onto the server where the Search Modeler application is running, under $JDK_HOME/jre/lib/security/cacerts. "JDK_HOME" points to the JDK directory, depending on whether you are using an external JDK or a bundled one.
    2. Set the JAVA_HOME environment variable to the JDK directory.
    3. Append $JAVA_HOME/bin to the PATH environment variable.
    4. Issue the command "keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt". Please substitute "ebs.crt" with the name of the certificate file you copied in step 3.1. The default key-store password is "changeit"; enter it when prompted. If successful, this process will end with a message saying "certificate successfully imported".

    Once you have completed the above steps successfully, you can deploy the search objects using Search Modeler and then start crawling them as well.

    Read the article
