Search Results

Search found 15383 results on 616 pages for 'home automation'.

Page 222 of 616

  • Get page permalink and title outside the loop in WordPress

    - by Aakash Chakravarthy
    Hello, how do I get the page permalink and title outside the loop in WordPress? I have a function like:

        function get_post_info() {
            global $post;
            $permalink = get_permalink($post->ID);
            $title = get_the_title($post->ID);
            return array('url' => $permalink, 'title' => $title);
        }

    When this function is called within the loop, it returns the post's title and URL. When it is called outside the loop, it does not return the current page's title and URL; instead it returns the latest post's title and URL. When called on the home page it should return the home page's title and URL. How can I get that?
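
    One possible direction (a hedged sketch, not from the original thread; get_queried_object_id() requires a reasonably recent WordPress and can return 0 on a blog-posts front page) is to ask WordPress for the queried object instead of relying on the global $post, which outside the loop may still point at the last post of the main query:

        // Sketch: resolve whatever the current request points at
        function get_post_info() {
            $id = get_queried_object_id(); // ID of the page/post being viewed
            return array(
                'url'   => get_permalink($id),
                'title' => get_the_title($id),
            );
        }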

    Read the article

  • How do I load a CSV file in Rails from a migration using LOAD DATA LOCAL INFILE?

    - by Chris Drappier
    Hi all, I have my CSV file in my public folder, and I'm trying to load it from a migration, but I get a file-not-found error using this script:

        ActiveRecord::Base.connection.execute(
          "load data local infile '#{RAILS_ROOT}/public/muds_variables.csv' into table muds_variables " +
          "fields terminated by ',' " +
          "lines terminated by '\n' " +
          "(variable_name, definition)")

    I've checked and re-checked the file path, and that's definitely where it lives. I've also tried it using just the file name without any of the path, and a few other combos, but I can't make it work :(. Can anyone help me out with this? Here's the error:

        Mysql::Error: File '/home/chris/rails_projects/muds/public/muds_variables.csv' not found (Errcode: 2):
        load data local infile '/home/chris/rails_projects/muds/public/muds_variables.csv' into table muds_variables
        fields terminated by ',' lines terminated by '\n' (variable_name, definition)

    - C
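
    As a hedged alternative sketch (it assumes a MudsVariable model exists, and sidesteps LOAD DATA LOCAL INFILE entirely, which also requires the local_infile option to be enabled on both the MySQL client and server), the rows can be inserted from Ruby:

        require 'csv'  # FasterCSV on Ruby 1.8-era Rails

        CSV.foreach("#{RAILS_ROOT}/public/muds_variables.csv") do |row|
          MudsVariable.create!(:variable_name => row[0], :definition => row[1])
        end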

    Read the article

  • Remove address fields from Mac Address Book using AppleScript

    - by cemregr
    Hello, a Facebook sync app filled the address fields of my Mac Address Book contacts with their cities. Having tons of people with useless addresses makes it difficult to search for people in the Google Maps app (I end up scrolling through many, many people; I only want to see those who have proper addresses entered). I want to clear all the home address fields in my Address Book using AppleScript. I wrote something small but couldn't get it to work; it probably needs some help from somebody who knows AppleScript :)

        tell application "Address Book"
            repeat with this_person in every person
                repeat with this_address in every address of this_person
                    if label of this_address is "home" then
                        remove this_address from addresses of this_person
                    end if
                end repeat
            end repeat
        end tell

    I tried to deduce the logic of multiple addresses/phones from other scripts but could only find examples of adding them, not removing them. Thanks! :)
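
    A hedged sketch of one possible fix (untested; in Address Book's scripting dictionary, elements are normally removed with "delete" rather than "remove ... from ...", and "save" commits the change):

        tell application "Address Book"
            repeat with this_person in every person
                -- filter with a whose-clause instead of testing each label
                repeat with this_address in (every address of this_person whose label is "home")
                    delete this_address
                end repeat
            end repeat
            save -- "save addressbook" on some older versions
        end tell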

    Read the article

  • App Store name and Info.plist

    - by user346194
    Hi guys, I've just completed my first app and, having tested it, I'm ready for submission. However, despite numerous web searches and reading, I'm struggling to finalise the method required to have a different name on the App Store to the name that appears under the app on the device home screen. In the Info.plist file there are references to bundle display name, executable name, bundle name, bundle identifier, product name, etc. So, for example, say I would like the App Store name to display as HELLO WORLD, and I would like the name below the icon on the device home screen to display as HELLO. How should I enter the data in the Info.plist file to achieve the above? Many thanks in advance for your help. Gav.
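
    For what it's worth, a hedged sketch: the home-screen name is governed by the CFBundleDisplayName key, while the App Store listing name is set in iTunes Connect during submission rather than in Info.plist at all:

        <!-- Info.plist fragment: the name shown under the icon -->
        <key>CFBundleDisplayName</key>
        <string>HELLO</string>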

    Read the article

  • Firefox doesn't show my CSS

    - by vtortola
    Hi, I have a strange problem: Firefox doesn't show the CSS of the page I'm working on, but Internet Explorer does. I have tried at home and at one of my friends' homes, and it happens in both places. But if I go to the Firefox Web Developer toolbar (I have it installed) and select CSS > Edit CSS, then the styles appear in the page and in the editor! As soon as I close it, they disappear again. I have no idea what the problem is :( Do you have any idea what it could be? Thanks in advance.

    Read the article

  • NFS issue brings down entire vSphere ESX estate

    - by growse
    I experienced an odd issue this morning where an NFS problem appeared to have taken down the majority of my VMs hosted on a small vSphere 5.0 estate. The infrastructure itself is 4x IBM HS21 blades running around 20 VMs. The storage is provided by a single HP X1600 array with an attached D2700 chassis running Solaris 11. There are a couple of storage pools on this which are exposed over NFS for the storage of the VM files, and some iSCSI LUNs for things like MSCS shared disks. Normally this is pretty stable, but I appreciate the lack of resiliency in having a single X1600 doing all the storage. This morning, in the logs of each ESX host, at around 05:21 GMT I saw a lot of entries like this:

        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4cf9a8 3
        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4dc9e8 3
        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4d3fa8 3
        2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4de0a8 3
        [....]
        2011-11-30T06:16:07.042Z cpu0:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:17:01.459Z cpu2:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:25:17.887Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000a4c2b28 3
        2011-11-30T06:27:16.063Z cpu3:4011)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
        2011-11-30T06:35:30.827Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
        2011-11-30T06:36:37.953Z cpu6:2054)NFS: 292: Restored connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
        2011-11-30T06:40:08.242Z cpu6:2054)NFSLock: 608: Stop accessing fd 0x41000a4c3e68 3
        2011-11-30T06:40:34.647Z cpu3:2051)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
        2011-11-30T06:44:42.663Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:44:53.973Z cpu0:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:51:28.296Z cpu5:2058)NFSLock: 608: Stop accessing fd 0x41000ae3c528 3
        2011-11-30T06:51:44.024Z cpu4:2052)NFSLock: 568: Start accessing fd 0x41000ae3b8e8 again
        2011-11-30T06:56:30.758Z cpu4:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T06:56:53.389Z cpu7:2055)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
        2011-11-30T07:01:50.350Z cpu6:2054)ScsiDeviceIO: 2316: Cmd(0x41240072bc80) 0x12, CmdSN 0x9803 to dev "naa.600508e000000000505c16815a36c50d" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
        2011-11-30T07:03:48.449Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000ae46b68 3
        2011-11-30T07:03:57.318Z cpu4:4009)NFSLock: 568: Start accessing fd 0x41000ae48228 again

    (I've put a complete dump from one of the hosts on pastebin: http://pastebin.com/Vn60wgTt)

    When I got into the office at 9am, I saw various failures and alarms and troubleshot the issue. It turned out that pretty much all of the VMs were inaccessible, and that the ESX hosts described each VM as either 'powered off', 'powered on', or 'unavailable'. The VMs described as 'powered on' were not in any way reachable or responding to pings, so this may be lies. There's absolutely no indication on the X1600 that anything was awry, and nothing on the switches to indicate any loss of connectivity. I only managed to resolve the issue by rebooting the ESX hosts in turn. I have a number of questions:

    - What the hell happened?
    - If this was a temporary NFS failure, why did it put the ESX hosts into a state from which a reboot was the only recovery?
    - In the future, when the NFS server goes a little off-piste, what would be the best approach to add some resilience? I've been looking at budgeting for next year and potentially have budget to purchase another X1600/D2700/disks; would an identical mirrored disk setup help to mitigate these sorts of failures automatically?

    Edit (added requested details):

    The X1600 has 12x 1TB disks lumped together in mirrored pairs as tank, and the D2700 (connected with a mini-SAS cable) has 12x 300GB 10k SAS disks lumped together in mirrored pairs as sastank.

        zpool status
          pool: rpool
         state: ONLINE
          scan: none requested
        config:
                NAME        STATE     READ WRITE CKSUM
                rpool       ONLINE       0     0     0
                  c7t0d0s0  ONLINE       0     0     0
        errors: No known data errors

          pool: sastank
         state: ONLINE
          scan: scrub repaired 0 in 74h21m with 0 errors on Wed Nov 30 02:51:58 2011
        config:
                NAME         STATE     READ WRITE CKSUM
                sastank      ONLINE       0     0     0
                  mirror-0   ONLINE       0     0     0
                    c7t14d0  ONLINE       0     0     0
                    c7t15d0  ONLINE       0     0     0
                  mirror-1   ONLINE       0     0     0
                    c7t16d0  ONLINE       0     0     0
                    c7t17d0  ONLINE       0     0     0
                  mirror-2   ONLINE       0     0     0
                    c7t18d0  ONLINE       0     0     0
                    c7t19d0  ONLINE       0     0     0
                  mirror-3   ONLINE       0     0     0
                    c7t20d0  ONLINE       0     0     0
                    c7t21d0  ONLINE       0     0     0
                  mirror-4   ONLINE       0     0     0
                    c7t22d0  ONLINE       0     0     0
                    c7t23d0  ONLINE       0     0     0
                  mirror-5   ONLINE       0     0     0
                    c7t24d0  ONLINE       0     0     0
                    c7t25d0  ONLINE       0     0     0
        errors: No known data errors

          pool: tank
         state: ONLINE
          scan: scrub repaired 0 in 17h28m with 0 errors on Mon Nov 28 17:58:19 2011
        config:
                NAME        STATE     READ WRITE CKSUM
                tank        ONLINE       0     0     0
                  mirror-0  ONLINE       0     0     0
                    c7t1d0  ONLINE       0     0     0
                    c7t2d0  ONLINE       0     0     0
                  mirror-1  ONLINE       0     0     0
                    c7t3d0  ONLINE       0     0     0
                    c7t4d0  ONLINE       0     0     0
                  mirror-2  ONLINE       0     0     0
                    c7t5d0  ONLINE       0     0     0
                    c7t6d0  ONLINE       0     0     0
                  mirror-3  ONLINE       0     0     0
                    c7t8d0  ONLINE       0     0     0
                    c7t9d0  ONLINE       0     0     0
                  mirror-4  ONLINE       0     0     0
                    c7t10d0 ONLINE       0     0     0
                    c7t11d0 ONLINE       0     0     0
                  mirror-5  ONLINE       0     0     0
                    c7t12d0 ONLINE       0     0     0
                    c7t13d0 ONLINE       0     0     0
        errors: No known data errors

    The filesystem exposed over NFS for the primary datastore is sastank/VMStorage.

        zfs list
        NAME                          USED  AVAIL  REFER  MOUNTPOINT
        rpool                        45.1G  13.4G  92.5K  /rpool
        rpool/ROOT                   2.28G  13.4G    31K  legacy
        rpool/ROOT/solaris           2.28G  13.4G  2.19G  /
        rpool/dump                   15.0G  13.4G  15.0G  -
        rpool/export                 11.9G  13.4G    32K  /export
        rpool/export/home            11.9G  13.4G    32K  /export/home
        rpool/export/home/andrew     11.9G  13.4G  11.9G  /export/home/andrew
        rpool/swap                   15.9G  29.2G   123M  -
        sastank                      1.08T   536G    33K  /sastank
        sastank/VMStorage            1.01T   536G  1.01T  /sastank/VMStorage
        sastank/comstar              71.7G   536G    31K  /sastank/comstar
        sastank/comstar/sql_tempdb   6.31G   536G  6.31G  -
        sastank/comstar/sql_tx_data  65.4G   536G  65.4G  -
        tank                         4.79T   578G    42K  /tank
        tank/FTP                      269G   578G   269G  /tank/FTP
        tank/ISO                     28.8G   578G  25.9G  /tank/ISO
        tank/backupstage             2.64T   578G  2.49T  /tank/backupstage
        tank/cifs                     301G   578G   297G  /tank/cifs
        tank/comstar                 1.54T   578G    31K  /tank/comstar
        tank/comstar/msdtc           1.07G   579G  32.8M  -
        tank/comstar/quorum           577M   578G  47.9M  -
        tank/comstar/sqldata         1.54T   886G   304G  -
        tank/comstar/vsphere_lun     2.09G   580G  22.2M  -
        tank/mcs-asset-repository    7.01M   578G  6.99M  /tank/mcs-asset-repository
        tank/mscs-quorum               55K   578G    36K  /tank/mscs-quorum
        tank/sccm                    16.1G   578G  12.8G  /tank/sccm

    As for the networking, all connections between the X1600, the blades and the switch are either LACP or EtherChannel bonded 2x 1Gbit links. The switch is a single Cisco 3750. Storage traffic sits on its own VLAN, segregated from VM machine traffic.
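
    A hedged aside on the resilience question (not from the original post; these are the standard ESX NFS advanced settings, and defaults vary by release): the NFS client declares a datastore unavailable after a run of missed heartbeats, so the relevant knobs can at least be inspected before deciding whether to tune them:

        # read the current NFS heartbeat tunables on a host (illustrative)
        esxcfg-advcfg -g /NFS/HeartbeatFrequency
        esxcfg-advcfg -g /NFS/HeartbeatTimeout
        esxcfg-advcfg -g /NFS/HeartbeatMaxFailures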

    Read the article

  • Why does xdebug crash apache on every XAMPP install I've tried?

    - by Ian Cook
    I've installed the Windows XAMPP package on three separate computers, two running Windows Vista 32-bit (1 Ultimate / 1 Home Premium) and one running Windows Vista 64-bit Home Premium. After enabling xdebug in php.ini and restarting Apache, viewing the default XAMPP localhost index causes Apache to crash in the same way every time, reporting 'php_xdebug.dll' as the Fault Module Name. Here's the full report from the Windows Crash Reporter:

        Problem signature:
        Problem Event Name: APPCRASH
        Application Name: apache.exe
        Application Version: 2.2.9.0
        Application Timestamp: 4853f994
        Fault Module Name: php_xdebug.dll
        Fault Module Version: 2.0.3.0
        Fault Module Timestamp: 47fcd9b9
        Exception Code: c0000005
        Exception Offset: 00008493
        OS Version: 6.0.6001.2.1.0.768.3
        Locale ID: 1033
        Additional Information 1: a34a
        Additional Information 2: c9c5f4fd744690d388ab9d5b3eb051a7
        Additional Information 3: cb2e
        Additional Information 4: 650bb5690556a17e911375b94d3e16f0

    I've tried Googling this issue but haven't found any resolution, only reports of similar errors. EDIT: I enabled the extension line for php_xdebug.dll and that seems to have stopped the crashing so far.
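
    A hedged footnote (the path is an assumption; the directive is the one used by the thread-safe Windows builds of that PHP 5.2 era): xdebug is meant to be loaded as a Zend extension, and loading it the wrong way was a common source of instability:

        ; php.ini sketch
        zend_extension_ts = "C:\xampp\php\ext\php_xdebug.dll"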

    Read the article

  • Parsing "true" and "false" using Boost.Spirit.Lex and Boost.Spirit.Qi

    - by Andrew Ross
    As the first stage of a larger grammar using Boost.Spirit I'm trying to parse "true" and "false" to produce the corresponding bool values, true and false. I'm using Spirit.Lex to tokenize the input and have a working implementation for integer and floating point literals (including those expressed in a relaxed scientific notation), exposing int and float attributes.

    Token definitions:

        #include <boost/spirit/include/lex_lexertl.hpp>

        namespace lex = boost::spirit::lex;

        typedef boost::mpl::vector<int, float, bool> token_value_type;

        template <typename Lexer>
        struct basic_literal_tokens : lex::lexer<Lexer>
        {
            basic_literal_tokens()
            {
                this->self.add_pattern("INT", "[-+]?[0-9]+");
                int_literal = "{INT}";
                // To be lexed as a float a numeric literal must have a decimal point
                // or include an exponent, otherwise it will be considered an integer.
                float_literal = "{INT}(((\\.[0-9]+)([eE]{INT})?)|([eE]{INT}))";
                literal_true = "true";
                literal_false = "false";
                this->self = literal_true | literal_false | float_literal | int_literal;
            }

            lex::token_def<int> int_literal;
            lex::token_def<float> float_literal;
            lex::token_def<bool> literal_true, literal_false;
        };

    Testing parsing of float literals (my real implementation uses Boost.Test, but this is a self-contained example):

        #include <string>
        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <limits>

        bool parse_and_check_float(std::string const & input, float expected)
        {
            typedef std::string::const_iterator base_iterator_type;
            typedef lex::lexertl::token<base_iterator_type, token_value_type> token_type;
            typedef lex::lexertl::lexer<token_type> lexer_type;

            basic_literal_tokens<lexer_type> basic_literal_lexer;

            base_iterator_type input_iter(input.begin());
            float actual;
            bool result = lex::tokenize_and_parse(input_iter, input.end(),
                                                  basic_literal_lexer,
                                                  basic_literal_lexer.float_literal,
                                                  actual);
            return result && std::abs(expected - actual) < std::numeric_limits<float>::epsilon();
        }

        int main(int argc, char *argv[])
        {
            if (parse_and_check_float("+31.4e-1", 3.14)) {
                return EXIT_SUCCESS;
            } else {
                return EXIT_FAILURE;
            }
        }

    Parsing "true" and "false": my problem is when trying to parse "true" and "false". This is the test code I'm using (after removing the Boost.Test parts):

        bool parse_and_check_bool(std::string const & input, bool expected)
        {
            typedef std::string::const_iterator base_iterator_type;
            typedef lex::lexertl::token<base_iterator_type, token_value_type> token_type;
            typedef lex::lexertl::lexer<token_type> lexer_type;

            basic_literal_tokens<lexer_type> basic_literal_lexer;

            base_iterator_type input_iter(input.begin());
            bool actual;
            lex::token_def<bool> parser = expected ? basic_literal_lexer.literal_true
                                                   : basic_literal_lexer.literal_false;
            bool result = lex::tokenize_and_parse(input_iter, input.end(),
                                                  basic_literal_lexer, parser, actual);
            return result && actual == expected;
        }

    but compilation fails with:

        boost/spirit/home/qi/detail/assign_to.hpp: In function ‘void boost::spirit::traits::assign_to(const Iterator&, const Iterator&, Attribute&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Attribute = bool]’:
        boost/spirit/home/lex/lexer/lexertl/token.hpp:434: instantiated from ‘static void boost::spirit::traits::assign_to_attribute_from_value<Attribute, boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>, void>::call(const boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>&, Attribute&) [with Attribute = bool, Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, AttributeTypes = boost::mpl::vector<int, float, bool, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, HasState = mpl_::bool_<true>]’
        ... backtrace of instantiation points ...
        boost/spirit/home/qi/detail/assign_to.hpp:79: error: no matching function for call to ‘boost::spirit::traits::assign_to_attribute_from_iterators<bool, __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, void>::call(const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, bool&)’
        boost/spirit/home/qi/detail/construct.hpp:64: note: candidates are: static void boost::spirit::traits::assign_to_attribute_from_iterators<bool, Iterator, void>::call(const Iterator&, const Iterator&, char&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >]

    My interpretation of this is that Spirit.Qi doesn't know how to convert a string to a bool - surely that's not the case? Has anyone else done this before? If so, how?
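
    A hedged sketch of one possible direction (untested, inferred purely from the error text above, which shows a candidate only for char): supply the missing trait specialization so Spirit can synthesize a bool from the matched character range:

        // Sketch only - namespace and struct names are taken from the error text
        namespace boost { namespace spirit { namespace traits
        {
            template <typename Iterator>
            struct assign_to_attribute_from_iterators<bool, Iterator, void>
            {
                static void call(Iterator const& first, Iterator const& last, bool& attr)
                {
                    attr = (std::string(first, last) == "true");
                }
            };
        }}}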

    Read the article

  • Ubuntu to Ubuntu VNC over SSH tunnel

    - by rxt
    I have a Linux Ubuntu desktop at home: SSH enabled, VNC server installed, router rule configured. It all works, and at home I can connect via the local network from my Mac. From the outside I can log in via SSH. I've configured PuTTY as follows:

        Session: host name and port number
        Connection > SSH > Tunnels > Forwarded ports: L5900|192.168.0.23

    The local address is 192.168.1.45. When I make the connection I can log in to the remote machine. Then I open Remote Desktop Viewer and click Connect:

        Protocol: vnc
        Host: ?
        Use host as SSH tunnel: ?

    I don't know what to use for the last two options. Which IP addresses should I use?
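
    For comparison, a hedged OpenSSH equivalent of that tunnel (user and public host name are assumptions): once the tunnel is up, the VNC client connects to localhost rather than to either LAN address:

        ssh -L 5900:192.168.0.23:5900 user@your-public-home-address
        # then point the VNC viewer at localhost:5900 (display :0)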

    Read the article

  • First Look - Oracle Data Mining

    - by kimberly.billings
    In his blog, JT on EDM, James Taylor shares his analysis of Oracle Data Mining, including its new GUI and Exadata integration. While Oracle Data Mining has been available for a while, it is now easier to access and try via the Amazon Cloud. Using the Oracle 11gR2 Data Mining Amazon Machine Image (AMI), you can launch an Oracle Data Mining-enabled instance directly through Amazon Web Services (AWS) and connect to it using the Oracle Data Miner graphical user interface. The new Oracle Data Mining GUI, which will be available to beta customers soon, provides more graphics, the ability to define, save and share analytical "work flows" to solve business problems, and provides more automation and simplicity. Taylor comments that, "the UI looks to have a nice look and feel including graphical model development flows, easy access to the data, nice little micro graphs when browsing data records and more." On using Oracle Data Mining with Exadata, Taylor writes, "Oracle says that the use of the ODM routines in the Exadata kernel is faster than running a native ODM model in the database by a factor of 2 and that this increases as more joins are used. This could mean that ODM outperforms even third party in-database analytics." Taylor concludes his blog with a positive overall review, stating that "ODM is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." Read the blog.

    Read the article

  • Create the ReturnUrl parameter manually

    - by user276640
    I have a view like 'home/details/5' that can be accessed by an anonymous user, but there is a button on it which can only be pressed by registered users. No problem: I can look at Request.IsAuthenticated and, if anonymous, show a login button instead of the secret button. But here is the problem: when the user presses login, I lose the address and parameters of the page. How can I create a login button and pass it a ReturnUrl parameter? Something like:

        <%= Html.ActionLink("enter to buy", "LogOn", "Account",
                new { ReturnUrl = path to view with route value }) %>

    I see only a stupid solution:

        <%= Html.ActionLink("enter to buy", "LogOn", "Account",
                new { ReturnUrl = "home/details/" + ViewContext.RouteData.Values["id"] }) %>

    but I don't like hard-coding controller names.
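
    A hedged sketch of one alternative (standard ASP.NET MVC APIs, though not from the original thread): take the current request's path and query straight off the request instead of rebuilding the URL by hand:

        <%= Html.ActionLink("enter to buy", "LogOn", "Account",
                new { ReturnUrl = Request.Url.PathAndQuery }, null) %>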

    Read the article

  • sem_open() error: "undefined reference to sem_open()" on linux (Ubuntu 10.10)

    - by Robin
    So I am getting the error "undefined reference to sem_open()" even though I have included the semaphore.h header. The same thing is happening for all my pthread function calls (mutex, pthread_create, etc.). Any thoughts? I am using the following command to compile:

        g++ '/home/robin/Desktop/main.cpp' -o '/home/robin/Desktop/main.out'

        #include <iostream>
        using namespace std;

        #include <pthread.h>
        #include <semaphore.h>
        #include <fcntl.h>

        const char *serverControl = "/serverControl";
        sem_t* semID;

        int main ( int argc, char *argv[] )
        {
            //create semaphore used to control servers
            semID = sem_open(serverControl, O_CREAT, O_RDWR, 0);
            return 0;
        }
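
    A hedged aside (standard toolchain behaviour, not from the original post): "undefined reference" is a linker error rather than a missing-header problem. On Linux of that era sem_open lives in librt and the pthread symbols in libpthread, so the usual fix is to append the libraries to the compile command:

        g++ /home/robin/Desktop/main.cpp -o /home/robin/Desktop/main.out -lpthread -lrt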

    Read the article

  • MVC ActionLink omits action when action equals default route value

    - by rjygraham
    I have the following routes defined for my application:

        routes.MapRoute(
            "Referral",                                      // Route name
            "{referralCode}",                                // URL with parameters
            new { controller = "Home", action = "Index" }    // Parameter defaults
        );

        routes.MapRoute(
            "Default",                                       // Route name
            "{controller}/{action}",                         // URL with parameters
            new { controller = "Home", action = "Index" }    // Parameter defaults
        );

    And I'm trying to create an ActionLink to go to the Index action on my AdminController:

        @Html.ActionLink("admin", "Index", "Admin")

    However, when the view is executed the ActionLink renders as follows (the Index action value is omitted):

        <a href="/Admin">admin</a>

    Normally this would be OK, but it's causing a collision with the "Referral" route. NOTE: if I instead use ActionLink to render a different action like "Default", the ActionLink renders correctly:

        <a href="/Admin/Default">admin</a>

    The fact that the "Default" action renders correctly leads me to believe the problem has to do with the default value specified for the route. Is there any way to force ActionLink to render the "Index" action as well?

    Read the article

  • How to host customer-developed code server-side

    - by user963263
    I'm developing a multi-tenant web application, most likely using ASP.NET MVC5 and Web API. I have used business applications in the past where it was possible to upload custom DLLs or paste custom code into a GUI to have custom functions run server-side. Those applications were self-hosted and single-tenant, though, so the customer-developed bits didn't impact other clients. I want to host the multi-tenant web application myself and allow customers to upload custom code that will run server-side. This could be for things like custom web services that client-side JavaScript could interact with, or for automation steps that they want triggered server-side asynchronously when a user takes a particular action. Additionally, I want to expose an API that allows customers' code to interact with data specific to the web application itself. Client code may need to be "wrapped" so that it has access to appropriate references - to our custom API and maybe to a white list of approved libraries. There are several issues to consider - security, performance (infinite loops, otherwise poorly written code, load balancing, etc.), whether to receive compiled DLLs or require raw code, and so on. Is there an established pattern for this sort of thing, or a sample project anyone can point to? Or any general recommendations?
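
    One hedged .NET-era building block worth naming (an assumption on my part, not something the question settles on, and nowhere near a complete sandbox on its own): load tenant assemblies into a partial-trust AppDomain so uploaded code runs with only execution permission:

        // Sketch: names and the permission set are illustrative
        // (System.Security / System.Security.Permissions namespaces)
        var setup = new AppDomainSetup { ApplicationBase = tenantPluginDir };
        var grant = new PermissionSet(PermissionState.None);
        grant.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        var sandbox = AppDomain.CreateDomain("TenantSandbox", null, setup, grant);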

    Read the article

  • Very unusual and weird problem with gVim 7.2

    - by Fabian
    After installing gVim and running gvim from the Run window, if I type :cd followed by a tab, I get \AppData, \Application Data, etc., which basically means I'm in my $HOME directory (C:\Users\Fabian). The weird thing is I do not have an \Application Data folder there. But if I run gvim.exe from its installation folder and type :cd followed by tab, I get \autoload, \colors, etc., which means I'm in the installation folder. And if I pin gvim.exe to the taskbar, then upon launch, typing :cd then tab gives me \Dictionaries, and hitting tab again gives a beep; I think in that last scenario I'm in some Adobe folder. Does anybody know how to fix this weird issue? I'd like to pin it to the taskbar and, upon launch, start in the $HOME directory (C:\Users\Fabian).
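
    One hedged workaround sketch (vimrc-level, so it applies however gVim happens to be launched; not from the original thread):

        " force the working directory at startup
        if has('win32') || has('win64')
            cd $HOME
        endif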

    Read the article

  • Redirect subdomain to subdomain on new domain

    - by Ali
    Hello, I own two domains, sio-india.org and sio-india.com. What I want to do is redirect all the subdomains from the first domain to the second domain, e.g. home.sio-india.org to home.sio-india.com - but I don't want to redirect sio-india.org to sio-india.com, and I also don't want to redirect www.sio-india.org to www.sio-india.com. I am using this code in .htaccess but it is not working:

        RewriteCond %{HTTP_HOST} ^(.*)sio-india\.org$ [NC]
        RewriteRule ^(.*)$ http://%1sio-india.com/$1 [R=301,L]

    Please help me, I am stuck.
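
    A hedged sketch of one possible adjustment (untested; it requires a real, non-www subdomain before matching, and %1 refers to the last RewriteCond group that matched):

        # skip the bare domain and www, then capture the subdomain as %1
        RewriteCond %{HTTP_HOST} !^(www\.)?sio-india\.org$ [NC]
        RewriteCond %{HTTP_HOST} ^([^.]+)\.sio-india\.org$ [NC]
        RewriteRule ^(.*)$ http://%1.sio-india.com/$1 [R=301,L]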

    Read the article

  • Need help with a custom HTML helper

    - by LeeHull
    I'm trying to create a custom HTML helper to simplify my master page's menu, but it is not rendering in the HTML when I use it. I'm thinking I may need to create a partial view - any ideas? I did this:

        public static string CreateAdminMenuLink(this HtmlHelper helper, string caption, string link)
        {
            var lnk = new TagBuilder("a");
            lnk.SetInnerText(caption);
            lnk.MergeAttribute("href", link);
            return lnk.ToString(TagRenderMode.SelfClosing);
        }

    Now in my view I have:

        <% Html.CreateAdminMenuLink("Home", "~/Page/Home"); %>

    but when I look at the source, it's empty. I tried adding <% using (Html.BeginForm()) %> and it adds a form, but the link still doesn't come up. I debugged, and the string is correct when I look at the watch, but it does not render. Any ideas?
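
    A hedged observation (standard Web Forms view syntax, though I can't see the full view): <% ... %> executes the expression and discards the returned string, while <%= ... %> writes it to the output, so the call likely needs to be:

        <%= Html.CreateAdminMenuLink("Home", "~/Page/Home") %>

    Separately, TagRenderMode.SelfClosing emits <a ... /> and drops the inner text; TagRenderMode.Normal is probably what's wanted for an anchor.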

    Read the article

  • Developing a Configurable Pricing Program

    - by Ben DeMott
    The organization I work at has some interesting requirements when it comes to pricing for online commerce. Currently the developers write different 'pricing rules', and those rules can be applied to our items based on attributes of the items. For example:

        INPUTS: [cost, sug_retail, discontinued, warehouse_qty, orderable_qty, brand, type,
                 days_available, shipping_rate, weight, map_protected, map_discount]
        MATCH:  brand=x, warehouse_qty 1, discontinued=True, map_protected=False
        SET:    retail_price = (sug_retail * 0.95), offer_price1 = (cost * 1.25 + shipping_rate)

    I am looking to allow the merchandising team to have more control over the pricing and formulas - they are, after all, technical enough to write Excel formulas. I've been looking at writing a desktop application that uses something like numexpr (http://code.google.com/p/numexpr/) or SymPy (http://sympy.org/en/index.html) to allow non-programmers to integrate their own logic into our pricing backend. We have multiple price tiers we have to set, for multiple markets, so an elegant solution is needed. It's getting frustrating for the dev team to continually tweak/manage all of the pricing rules (we sell over 200 brands in 3 markets). My questions: does this seem like a decent approach? Can you think of a better way to parse a string mathematical grammar? Can you think of a different way for users to provide formulas to integrate into an automated pricing system? Does anyone know of any examples of existing applications that do this? Excel and Access are out of the question - the volume of data we manipulate has already proven the need to automate it; now we just need some visibility into that automation.
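
    For illustration, a hedged sketch of the numexpr route the question mentions (attribute names are taken from the example rule above; the formula string stands in for merchandiser input):

        import numexpr as ne

        item = {"cost": 10.0, "sug_retail": 25.0, "shipping_rate": 4.5}
        formula = "cost * 1.25 + shipping_rate"    # supplied by the pricing team
        offer_price1 = ne.evaluate(formula, local_dict=item)
        print(float(offer_price1))                 # 17.0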

    Read the article

  • Linux: file recovery (Urgent) [closed]

    - by Ashine
    Hi friends, I desperately need some help regarding a problem I am facing now. While creating a soft link for a very important file, I gave the command in reverse by mistake: instead of running "ln target linkname" I ran "ln linkname target". This has resulted in references that pointed to the target file now pointing to the link, and the actual references to the target file are lost. How can I recover the files? "/home/user/data1" was the original file location and "/home/user/db2" was the desired soft link for this data. I had to run "ln data1 db2" but I ran "ln db2 data1". This has resulted in 'data1' now pointing towards 'db2', and the actual data in 'data1' cannot be retrieved. Someone please help. Thanks in advance.
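
    For future reference, a hedged reminder of the argument order (this doesn't recover anything by itself):

        # ln [-s] TARGET LINK_NAME - the existing file comes first
        ln -s /home/user/data1 /home/user/db2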

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It's data that comes from a multitude of sources - not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling; here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records. These are just a few examples of what constitutes big data. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, "big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle's Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate your enterprise data - this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more, or go to our website at www.oracle.com/bigdata

    Read the article

  • Curl_errno=55, "Failed sending network data."

    - by 4dplane
    Hi all: I have a PHP script that updates a database via an API. This script works on one server but not on another. Both servers have curl enabled and PHP 5.2.6 or above. The error happens in the do_put() method; the rest of the script seems to be fine. I have found that curl_errno=55 means "Failed sending network data", and curl_error reports "select/poll returned error".

        <?php
        //phpinfo();
        require_once "class.DelveAuthUtil.php";

        $access_key = "";
        $secret = "W";
        $org_id = "";
        $media_id = "";
        $new_tag_name = "uvideo";

        $assign_tag_to_media_url = "http://api..com/rest/organizations/$org_id/media/$media_id/properties/tags/$new_tag_name";
        $signed_create_new_tag_url = DAU::authenticate_request("PUT", $assign_tag_to_media_url, $access_key, $secret);

        # perform the creation of the new custom property
        $put_response = do_put($signed_create_new_tag_url);

        # Execute the POST
        function do_post($url, $params = array()) {
            # Combine parameters
            $param_string = "";
            foreach ($params as $key => $value) {
                $value = urlencode($value);
                $param_string = $param_string . "&$key=$value";
            }
            // Get the curl session object
            $session = curl_init($url);
            // Set the POST options.
            curl_setopt($session, CURLOPT_POST, true);
            curl_setopt($session, CURLOPT_POSTFIELDS, $param_string);
            curl_setopt($session, CURLOPT_HEADER, false);
            curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
            // Do the POST and then close the session
            $response = curl_exec($session);
            curl_close($session);
            return $response;
        }

        # Execute the PUT
        function do_put($url) {
            // Get the curl session object
            $session = curl_init($url);
            print(" <br> session= " . $session . "<br>");
            // Set the PUT options.
            print("opt0= " . curl_setopt($session, CURLOPT_VERBOSE, TRUE) . "<br>");
            print("opt1= " . curl_setopt($session, CURLOPT_PUT, true) . "<br>");
            print("opt2= " . curl_setopt($session, CURLOPT_HEADER, false) . "<br>");
            print("opt3= " . curl_setopt($session, CURLOPT_RETURNTRANSFER, true) . "<br>");
            // Do the PUT and then close the session
            $response = curl_exec($session);
            if (curl_errno($session)) {
                print("curl_errno= " . curl_errno($session) . "<br>");
                print("curl_error= " . curl_error($session) . "<br>");
            } else {
                curl_close($session);
            }
        }
        ?>

    I have a small lead in the Apache log - there is an SSL issue that I do not know how to resolve:

        [Thu Mar 25 15:57:58 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
        [Thu Mar 25 15:57:58 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations
        [Thu Mar 25 15:58:59 2010] [notice] caught SIGTERM, shutting down
        [Thu Mar 25 15:58:59 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:58:59 2010] [warn] Init: SSL server IP/port conflict: my1.com:443 (/home/httpd/my.com/conf/kloxo.my1.com:69) vs. elggtest.my1.com:443 (/home/httpd/elggtest.my1.com/conf/kloxo.elggtest.my1.com:71)
        [Thu Mar 25 15:58:59 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
        [Thu Mar 25 15:58:59 2010] [notice] Digest: generating secret for digest authentication ...
        [Thu Mar 25 15:58:59 2010] [notice] Digest: done
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
        [Thu Mar 25 15:59:00 2010] [warn] Init: SSL server IP/port conflict: my1.com:443 (/home/httpd/my1.com/conf/kloxo.my1.com:69) vs. elggtest.my1.com:443 (/home/httpd/elggtest.my1.com/conf/kloxo.elggtest.my1.com:71)
        [Thu Mar 25 15:59:00 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
        [Thu Mar 25 15:59:00 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations

    Any help would be great! Thanks, 4dplane
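
    A hedged aside (the constant exists in PHP's curl binding; whether it fixes this particular server pair is an assumption): for a body-less PUT like the one above, CURLOPT_PUT normally expects an upload stream via CURLOPT_INFILE/CURLOPT_INFILESIZE, and a common workaround is to issue the verb directly:

        // replaces: curl_setopt($session, CURLOPT_PUT, true);
        curl_setopt($session, CURLOPT_CUSTOMREQUEST, 'PUT');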

    Read the article

  • FireBug isn't working

    - by Nano HE
    Hi, I created an index.php file located at http://localhost/home/index.php, filled with the code below:

        <?php
        # include the file
        require_once("FirePHP.class.php");

        # create the object
        $firephp = FirePHP::getInstance(true);

        # send information
        $firephp->fb("Hello world!");
        ?>

    I enabled FireBug and FirePHP. BTW, I downloaded FirePHPCore and copied FirePHP.class.php into the http://localhost/home/ directory. When I run this code I can't see the message "Hello world!" in the FireBug console. I followed the tutorial at http://yensdesign.com/2008/10/how-to-debug-php-code/. Any suggestion?
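
    A hedged sketch of the usual first fix (standard FirePHP behaviour, not from the original post): FirePHP ships its messages in HTTP response headers, so nothing may be echoed before the fb() calls unless output buffering is on:

        <?php
        ob_start();  // keep the response headers writable
        require_once("FirePHP.class.php");
        $firephp = FirePHP::getInstance(true);
        $firephp->fb("Hello world!");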

    Read the article

  • Lending Club Selects Oracle ERP Cloud Service

    - by Di Seghposs
    Another Oracle ERP Cloud Service customer turning to Oracle to help increase efficiencies and lower costs!! Lending Club, the leading platform for investing in and obtaining personal loans, has selected Oracle Fusion Financials to help improve decision-making and workflow, implement robust reporting, and take advantage of the scalability and cost savings provided by the cloud. After an extensive search, Lending Club selected Oracle due to the breadth and depth of capabilities and ongoing innovation of Oracle ERP Cloud Service. Since their online lending platform is internally developed, they chose Oracle Fusion Financials in the cloud to easily integrate with current systems, keep IT resources focused on the organization's own platform, and reap the benefits of lowered costs in the cloud. The automation, communication and collaboration features in Oracle ERP Cloud Service will help Lending Club achieve their efficiency goals through better workflow, as well as provide greater control over financial data. Lending Club is also implementing robust analytics and reporting to improve decision-making through embedded business intelligence. "Oracle Fusion Financials is clearly the industry leader, setting an entirely new level of insight and efficiencies for Lending Club," said Carrie Dolan, CFO, Lending Club. "We are not only incredibly impressed with the best-of-breed capabilities and business value from our adoption of Oracle Fusion Financials, but also the commitment from Oracle to its partners, customers, and the ongoing promise of innovation to come."

    Resources:
    - Oracle ERP Cloud Service Video
    - Oracle ERP Cloud Service Executive Strategy Brief
    - Oracle Fusion Financials
    - Quick Tour of Oracle Fusion Financials

    If you haven't heard about Oracle ERP Cloud Service, check it out today!

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (combinations supported in EM 12cR4):

        Primary   Standby [1 or more]
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    where RON = RAC One Node, supported via custom post-scripts in the service template.

    A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.
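
    As a hedged illustration of the CloneDB mechanics described above (the paths are invented; the supported route is the clonedb.pl script from My Oracle Support, and dbms_dnfs is the documented database package):

        -- enable copy-on-write clone semantics in the clone's init.ora
        CLONEDB=TRUE

        -- point each sparse clone datafile at the RMAN image copy over dNFS
        EXECUTE dbms_dnfs.clonedb_renamefile('/backup/users01.dbf', '/clone/users01.dbf');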
    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and setup requires a series of steps, typically performed across different users and different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). One command creates all the cloud artifacts like roles, administrators, credentials, database profiles, the PaaS Infrastructure Zone, database pools and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. The kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further; the script for this scenario is dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with the host names, Oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input XML: this file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.
    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention. These have mostly been asked for by customers like you. They are:

    - Custom naming of DB services: self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of service templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile, like datafiles, control files, snapshot IDs, export/import files, etc., prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool, and as an extension you can also delete successful requests
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management Page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • How do I generate a coverage XML report for a single package?

    - by Wraith
    I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to cover only that:

        nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae

    And here are the results, which look good:

        Name     Stmts   Exec  Cover   Missing
        ----------------------------------------------
        ae           1      1   100%
        ae.util    253    224    88%   39, 63-65, 284, 287, 362, 406
        ----------------------------------------------
        TOTAL      263    234    88%
        ----------------------------------------------------------------------
        Ran 68 tests in 5.292s

    However, when I run coverage xml, coverage pulls in more packages than necessary, including the Python email and logging packages, which have nothing to do with my code. If I run coverage xml ae, I get this error:

        No source for code: '/home/wraith/dev/projects/trimurti/src/ae': [Errno 21] Is a directory: '/home/wraith/dev/projects/trimurti/src/ae'

    Is there a way to generate the XML for just the ae package?
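
    A hedged sketch of one direction (flag availability depends on the coverage.py version; --omit appeared earlier than --include, which arrived in later 3.x releases):

        # restrict the XML report to the package's files
        coverage xml --include='ae*'

        # or equivalently via .coveragerc
        [report]
        include = ae*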

    Read the article
