Search Results

Search found 27568 results on 1103 pages for 'git post receive'.


  • How to get the content of an iframe into a PHP variable? [closed]

    - by Sahil
    My code is somewhat like this:

        <?php
        if ($_REQUEST['post']) {
            $title = $_REQUEST['title'];
            $body = $_REQUEST['body'];
            echo $title.$body;
        }
        ?>
        <script type="text/javascript" src="texteditor.js"></script>
        <form action="" method="post">
          Title: <input type="text" name="title"/><br>
          <a id="bold" class="font-bold"> B </a>
          <a id="italic" class="italic"> I </a>
          Post: <iframe id="textEditor" name="body"></iframe>
          <input type="submit" name="post" value="Post" />
        </form>

    The texteditor.js file code is:

        $(document).ready(function(){
            document.getElementById('textEditor').contentWindow.document.designMode = "on";
            document.getElementById('textEditor').contentWindow.document.close();
            $("#bold").click(function(){
                if ($(this).hasClass("selected")) { $(this).removeClass("selected"); }
                else { $(this).addClass("selected"); }
                boldIt();
            });
            $("#italic").click(function(){
                if ($(this).hasClass("selected")) { $(this).removeClass("selected"); }
                else { $(this).addClass("selected"); }
                ItalicIt();
            });
        });

        function boldIt(){
            var edit = document.getElementById("textEditor").contentWindow;
            edit.focus();
            edit.document.execCommand("bold", false, "");
            edit.focus();
        }

        function ItalicIt(){
            var edit = document.getElementById("textEditor").contentWindow;
            edit.focus();
            edit.document.execCommand("italic", false, "");
            edit.focus();
        }

        function post(){
            var iframe = document.getElementById("body").contentWindow;
        }

    I actually want to fetch the data entered in this text editor (which is built from an iframe and JavaScript) and store it somewhere else, but I'm not able to get at the content that is typed into the editor (i.e. the iframe). Please help me out with this.
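
    The editor content never reaches PHP because an iframe, unlike a form field, is not submitted with the form. A minimal sketch of the usual workaround, assuming the markup above (the unnamed <form> selector and the hidden field are assumptions, not part of the original code):

        // Copy the editable iframe's HTML into a hidden input just before the form posts,
        // so the server sees it as an ordinary $_POST field.
        $(document).ready(function(){
            $('form').submit(function(){
                var html = document.getElementById('textEditor').contentWindow.document.body.innerHTML;
                $('<input>', { type: 'hidden', name: 'body', value: html }).appendTo(this);
            });
        });

    On the PHP side the value then arrives as $_POST['body'], just like the title field.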

    Read the article

  • File upload with tdo-miniforms not working

    - by user338109
    I am using the TDO Mini Forms plugin for WordPress. I have a form set up that lets the user submit files. The files are successfully uploaded to the tmp folder, but once the post is created they are not copied into the allocated post folder. The only thing created in the folder with a post ID is a file called "array", which seems to hold the binary data of the uploaded files. The "correct" URLs are appended to the post content. I am running WordPress 2.9.2 on Snow Leopard using MAMP.

    Read the article

  • How to redirect with .htaccess (keeping legacy links)

    - by Laurent
    Hello, I recently switched CMSes. While using WordPress, I had this permalink convention: "/year/post". Now I'd like to have "/year/month/post". To keep legacy links working, I need to redirect from "http://site.com/2009/sample-post" to "http://site.com/2009/01/sample-post"; the "01" should be permanent in this case. This is what I've got at the moment:

        RewriteEngine on
        RewriteCond $1 !^(images|system|themes|_|wp-content|mint|assets|favicon\.ico|robots\.txt|index\.php) [NC]
        RewriteRule ^(.*)$ /index.php?/$1 [L]

    Thanks in advance!
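
    A minimal sketch of one way to do it, assuming the month is always the fixed "01" shown in the example: add a 301 redirect ahead of the existing rules, so old two-segment links are rewritten before the catch-all rule runs. The four-digit-year pattern is an assumption based on the sample URLs.

        RewriteEngine on
        # Redirect legacy /YYYY/post links to /YYYY/01/post
        RewriteRule ^([0-9]{4})/([^/]+)/?$ /$1/01/$2 [R=301,L]
        RewriteCond $1 !^(images|system|themes|_|wp-content|mint|assets|favicon\.ico|robots\.txt|index\.php) [NC]
        RewriteRule ^(.*)$ /index.php?/$1 [L]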

    Read the article

  • Will the error be displayed?

    - by user281180
    I have an ajax post, and in the controller I return nothing. If something fails, will the error message be displayed with the following code?

        [AcceptVerbs(HttpVerbs.Post)]
        public void Edit(Model model)
        {
            model.Save();
        }

        $.ajax({
            type: "POST",
            url: '<%=Url.Action("Edit","test") %>',
            data: JSON.stringify(data),
            contentType: "application/json; charset=utf-8",
            dataType: "html",
            success: function() { },
            error: function(request, status, error) {
                alert("Error: " & request.responseText);
            }
        });
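
    The error callback only runs when the request itself fails (for example a non-success HTTP status); a void action that completes without throwing comes back as an empty 200, so jQuery calls success instead. A sketch of one assumed way to surface a failed save (Model and Save come from the question; the try/catch and status handling do not):

        [AcceptVerbs(HttpVerbs.Post)]
        public void Edit(Model model)
        {
            try
            {
                model.Save();
            }
            catch (Exception ex)
            {
                // Return a 500 so the jQuery error handler fires and can show ex.Message.
                Response.StatusCode = 500;
                Response.Write(ex.Message);
            }
        }

    Note also that the alert joins strings with "&", which in JavaScript is a bitwise operator rather than concatenation; "+" is presumably what was meant.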

    Read the article

  • Ajax posting to PHP

    - by JQonfused
    Hi guys, I'm testing a jQuery ajax post method on a local Apache 2.2 server with PHP 5.3 (totally new at this). Here are the files, all in the same folder.

    HTML body (jQuery library included in the head):

        <form id="postForm" method="post">
          <label for="name">Input Name</label>
          <input type="text" name="name" id="name" /><br />
          <label for="age">Input Age</label>
          <input type="text" name="age" id="age" /><br />
          <input type="submit" value="Submit" id="submitBtn" />
        </form>
        <div id="resultDisplay"></div>
        <script src="queryRequest.js"></script>

    queryRequest.js:

        $(document).ready(function(){
            $('#s').focus();
            $('#postForm').submit(function(){
                var name = $('#name').val();
                var age = $('#age').val();
                var URL = "post.php";
                $.ajax({
                    type:'POST',
                    url: URL,
                    datatype:'json',
                    data:{'name': name ,'age': age},
                    success: function(data){
                        $('#resultDisplay').append("Value returned.<br />name: "+data.name+" age: "+data.age);
                    },
                    error: function() {
                        $('resultDisplay').append("ERROR!")
                    }
                });
            });
        });

    post.php:

        <?php
        $name = $_POST['name'];
        $age = $_POST['age'];
        $return = array('name' => $name, 'age' => $age);
        echo json_encode($return);
        ?>

    After filling in the two fields and pressing Submit, the success method is called and the text is appended, but the values returned from the ajax post are undefined. Then, after less than a second, the text fields are emptied and the text appended to the div is gone. It doesn't seem to be a page refresh, though, since there's no empty-page flash. What's going on here? I'm sure it's a silly mistake, but Firebug isn't telling me anything.
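
    A sketch of the same handler with the two changes the symptoms point to (both are assumptions about the cause): the submit handler never cancels the browser's default submission, so the page really does reload a moment later, and the option is spelled dataType, so with "datatype" (and no JSON Content-Type header from PHP) the response stays a plain string and data.name/data.age are undefined.

        $('#postForm').submit(function(event){
            event.preventDefault();               // keep the browser from submitting/reloading
            $.ajax({
                type: 'POST',
                url: 'post.php',
                dataType: 'json',                 // note the capital T
                data: { name: $('#name').val(), age: $('#age').val() },
                success: function(data){
                    $('#resultDisplay').append("Value returned.<br />name: " + data.name + " age: " + data.age);
                },
                error: function(){
                    $('#resultDisplay').append("ERROR!");
                }
            });
        });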

    Read the article

  • IO::Pipe - close(<handle>) does not set $?

    - by danboo
    My understanding is that closing the handle for an IO::Pipe object should be done with the method ($fh->close) and not the built-in (close($fh)). The other day I goofed and used the built-in out of habit on an IO::Pipe object that was opened to a command I expected to fail. I was surprised when $? was zero and my error checking wasn't triggered. I realized my mistake: if I use the built-in, IO::Pipe can't perform the waitpid() and can't set $?. But what surprised me was that perl still seemed to close the pipe without setting $? via the core. I worked up a little test script to show what I mean:

        use 5.012;
        use warnings;
        use IO::Pipe;

        say 'init pipes:';
        pipes();

        my $fh = IO::Pipe->reader(q(false));

        say 'post open pipes:';
        pipes();

        say 'return: ' . $fh->close;
        #say 'return: ' . close($fh);
        say 'status: ' . $?;
        say q();

        say 'post close pipes:';
        pipes();

        sub pipes {
            for my $fd ( glob("/proc/self/fd/*") ) {
                say readlink($fd) if -p $fd;
            }
            say q();
        }

    When using the method, it shows the pipe being gone after the close and $? is set as I expected:

        init pipes:
        post open pipes:
        pipe:[992006]
        return: 1
        status: 256
        post close pipes:

    And when using the built-in, it also appears to close the pipe, but does not set $?:

        init pipes:
        post open pipes:
        pipe:[952618]
        return: 1
        status: 0
        post close pipes:

    It seems odd to me that the built-in results in the pipe closure but doesn't set $?. Can anyone help explain the discrepancy? Thanks!
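
    For what it's worth, the core close() does reap a child and set $? when perl itself opened the pipe (a "-|" open records the child's pid in the filehandle); IO::Pipe forks the child on its own, so only its close method knows which pid to waitpid() on. A small sketch of the contrast, assuming a false(1) on the system:

        use strict;
        use warnings;
        use IO::Pipe;

        # Piped open: perl forked the child, so the built-in close() waits and sets $?.
        open(my $ph, '-|', 'false') or die "open: $!";
        close($ph);
        print "piped open,  close(): \$? = $?\n";    # non-zero

        # IO::Pipe: the module forked the child, so only $fh->close calls waitpid().
        my $fh = IO::Pipe->reader('false');
        $fh->close;
        print "IO::Pipe, \$fh->close: \$? = $?\n";   # non-zero again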

    Read the article

  • jQuery serialize() + custom?

    - by tazphoenix
    When using $.post and $.get in jQuery, is there any way to add custom variables to the URL and send them too? I tried the following:

        $.ajax({type:"POST", url:"file.php?CustomVar=data", data:$("#form").serialize()});

    And:

        <input name="CustomVar" type="hidden" value="data" />

        $.ajax({type:"POST", url:"file.php", data:$("#form").serialize()});

    The first one's problem is that it sends the custom variable as GET, but I want to receive it as POST. As for the second one, I'm using it right now, but isn't there a better way?
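
    A small sketch of two other ways to keep the extra pair in the POST body, assuming the same form and file.php endpoint as above:

        // Append the encoded pair to the serialized string.
        $.ajax({
            type: "POST",
            url: "file.php",
            data: $("#form").serialize() + "&CustomVar=" + encodeURIComponent("data")
        });

        // Or build on serializeArray() and let jQuery handle the encoding.
        var fields = $("#form").serializeArray();
        fields.push({ name: "CustomVar", value: "data" });
        $.post("file.php", fields);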

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0).

    Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, the 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post-defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

    Read the article

  • How can I work efficiently on a desktop sharing workflow?

    - by OSdave
    I am a freelance Magento developer, based in Spain. One of my clients is a Germany-based web development company, and they're asking me something I think is impossible. OK, maybe not impossible, but definitely not a preferred way of doing things.

    One of their clients has a Magento Enterprise installation, which is the paid (and I think proprietary) version of Magento. Their client has forbidden them to download the files from his server. My client is now asking me to study one particular module of the application in order to interact with it from a custom module I'll have to develop. As they have read-only SSH access to their client's server, they came up with this solution: set up a desktop/screen-sharing session between one of their developers' workstations and mine, alongside a Skype conversation. Their idea is that I'll say to the developer: "show me file foo.php". The developer will then open foo.php in his IDE. I'll then have to ask him to show me the bar method, the parent class, etc. Remember that it's a read-only session, so forget about putting a Zend_Debug::log() anywhere, and don't even think about an xDebug breakpoint (they don't use any kind of debugger, sic). Their client has also forbidden them to use any version control system...

    My first reaction when they explained this to me was (and I actually did say it out loud to them): "Well, find another client." But they took it as a joke. I understand that from a business point of view rejecting a client is not good practice, but I think the conditions of this assignment make it impossible to complete. At least according to my workflow. I mean, the way I work on or learn a new framework/program is:

    - download all files and a copy of the db to my PC
    - create a git repository and a branch
    - run the application locally
    - use breakpoints
    - use Zend_Debug::log()
    - write the code and tests
    - commit to the git repo
    - upload to the server (test/staging first if there is one, production if not)

    I have agreed to try the desktop-sharing session, although I think it will be a waste of time. On one hand I don't mind, they pay me for that time, but I know myself and I don't like the feeling of losing my time. On the other hand, I have other clients for whom I can work according to my workflow. I am about to tell them that I cannot (don't want to) do it. Well, I'll first try this desktop-sharing session: maybe I'm wrong and it can actually work. But I like to consider myself a professional, and I know that I don't know everything, so I try to keep an open mind and I am always willing to learn new stuff.

    So my questions are:

    - Can this desktop-sharing workflow work?
    - What should be done in order to get the most out of it?
    - Taking into account all the obstacles (geographic locations, no local copy, no git), is there another way for me to work on that project?

    Read the article

  • Windows NT from vmware to kvm

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from VMware to KVM. I tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the VMware virtual disk Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with KVM:

        kvm -hda test.qemu -vnc :1 -m 750

    but I receive "error loading operating system". I also tried with raw partitions I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand new image file with:

        qemu-img create -f qcow2 test.qcow2 2G

    I partitioned the new image file and copied the original partition 1 to the new partition 1 with dd:

        dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M

    No luck again. I also tried with a single unpartitioned file:

        qemu-img create -f qcow2 test.qcow2 2G

    and copied partition 1 to the new image file:

        dd if=/dev/mapper/loop0p1 of=test.img bs=128M

    but when booting, I get a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live ISO and I get the same screens and errors. Note that GRUB sees the Windows setup and gives me the boot choice. I suspect the problem is that the VMware machine is probably a SCSI guest, and in CentOS 6 (my system) SCSI emulation is no longer supported. But in that case, what do I change in Windows? I'm not so skilled with MS systems. Thank you for the help. Luca Rossi
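
    For reference, a minimal sketch of the conversion and boot flow with the disk explicitly presented as an IDE drive, which is usually the safest controller for an NT-era guest; the output file name is an assumption, the rest comes from the question:

        # Convert the VMware disk straight to qcow2
        qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk nt.qcow2

        # Boot it with the disk attached as an IDE drive
        kvm -drive file=nt.qcow2,if=ide -m 750 -vnc :1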

    Read the article

  • puppetca never returns anything

    - by mrisher
    Hi: I'm trying to configure Puppet on Ubuntu, and strangely I am never able to generate a certificate, because my server never shows any pending certificate requests. Put differently: on the server I am running puppetmasterd, and on the client I am able to connect to the server, but the client keeps printing

        notice: Did not receive certificate
        warning: peer certificate won't be verified in this SSL session

    and yet the server never sees the request:

        mrisher@lab2$ puppetca --list
        [nothing shows up]
        mrisher@lab2$ puppetca --sign clientname.domain.com
        clientname.domain.com
        err: Could not call sign: Could not find certificate request for clientname.domain.com

    Edit: There was a suggestion that autosigning was happening, but that does not seem to be it. There is no autosign.conf file, and when I run puppetmasterd --no-daemonize -d -v I receive the following output:

        info: Could not find certificate for 'clientname.domain.com'

    every time the client says "notice: Did not receive certificate". I checked the certs on the server and there don't seem to be any:

        mrisher@lab2:~$ puppetca --list --all
        mrisher@lab2:~$ sudo puppetca --list --all
        + lab2.domain.com // this is the server (master)
        mrisher@lab2:~$ sudo puppetca --list
        [blank line]
        mrisher@lab2:~$

    Note: This is mostly running the default install from Ubuntu, if that gives any leads. Thanks for any help out there.

    Read the article

  • Exim and receiving email with large recipient lists

    - by AceJordin
    I have Exim4 running on Debian, configured to receive mail for multiple domains. Exim is set to forward all email received for one of the domains to another box. That box is configured with a catchall mailbox that everything goes into. My issue is that when an email with a large recipient list (all at the same domain, but different users) is sent to the domain, Exim receives the single email over multiple connections. This means that the catchall mailbox receives multiple copies of the single email, each containing the full recipient list.

    For example, I was able to reproduce it by sending an email from my Gmail account that contained 500 recipients (e.g. [email protected]; [email protected]; [email protected]; etc., for a total of 500). Exim received the message as 20 messages (25 recipients per message; this appears to be a Gmail server setting). So the catchall mailbox received 20 messages, each containing all 500 addresses.

    I'm pretty sure I understand why this is happening, but is there any way I can configure Exim to only receive it once, or to combine the copies into one? Is there anything that can be done on my end, or am I at the mercy of the sending email server? This is causing havoc with a process that polls the catchall mailbox and parses each recipient in each email.

    Read the article

  • BizTalk configuration broken following WCF hotfix installation

    - by Sir Crispalot
    I usually post over on StackOverflow, but thought this was probably better suited to ServerFault. Please migrate if I'm wrong!

    I am developing a WCF service and a BizTalk application on my workstation at the moment. As part of the WCF service, I had to install hotfix 971493 from Microsoft, which updates some core WCF assemblies. Following installation of that hotfix, I am now experiencing severe issues in my existing BizTalk application. When I attempt to configure the properties of an existing WCF-Custom receive location, I get this error:

        Error loading properties (System.IO.FileLoadException)
        The located assembly's manifest definition does not match the assembly reference.
        (Exception from HRESULT: 0x80131040)

    If I click OK (the same error repeats four times), I eventually see the WCF-Custom properties dialog. However, if I click on the various tabs, I continue to receive errors:

        The located assembly's manifest definition does not match the assembly reference.
        (Exception from HRESULT: 0x80131040) (Microsoft.BizTalk.Adapter.Wcf.Admin)

    The WCF-Custom receive location was working yesterday, and I installed the hotfix this morning. I'm guessing these two are related, and that BizTalk somehow has a reference to the old WCF assemblies. Does anyone know how I can fix this?

    Read the article

  • Exchange Online SMTP Not Working With Any Email Client

    - by emre nevayeshirazi
    I am trying to switch our company mail server to Exchange Online. I have successfully added my domain and users, and can send and receive mail through Outlook Web App. I can also send and receive if I configure my Outlook 2013 client using the Exchange protocol. However, some folks in the company are using Thunderbird and some old Outlook clients. For those, I tried to connect to Exchange via IMAP/SMTP. This is what I use:

        For incoming: IMAP / Port 993 with SSL / Host: outlook.office365.com
        For outgoing: SMTP / Port 589 with TLS / Host: smtp.office365.com

    I can receive emails, but I am not able to send them. I keep getting:

        An error occurred while sending mail. The mail server responded: 4.3.2 Service not active. Please verify that your email address is correct in your Mail preferences and try again.

    My username and password are correct; I am using my mail address as the username for the mailbox. I also tried sending mail via a C# application, which was working with the outlook.com and gmail.com SMTP settings. It also fails to send emails and returns the same error code. I thought Thunderbird and other old clients such as Office 2003 might not support Exchange Online, so I tried the same settings in Office 2013. It successfully connected to my mailbox when checking the configuration, but failed to send the test message and returned the same error code. The configuration for the incoming and outgoing mailbox is taken from here; it is also available on the Office 365 user page, and they are the same. What could be the reason for this error?
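
    For comparison, a minimal sketch of the C# test under the settings Microsoft documents for Office 365 SMTP submission (smtp.office365.com, port 587, STARTTLS); the mailbox address and password are placeholders, and the full mailbox address is used as the user name:

        using System.Net;
        using System.Net.Mail;

        class SendTest
        {
            static void Main()
            {
                var client = new SmtpClient("smtp.office365.com", 587)
                {
                    EnableSsl = true,   // STARTTLS on the submission port
                    Credentials = new NetworkCredential("[email protected]", "password")
                };
                client.Send("[email protected]", "[email protected]", "Test subject", "Test body");
            }
        }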

    Read the article

  • Exchange 2007 relay from sendmail, message "Undelivered". Possible reasons?

    - by garlicman
    Note: This is my re-post from Stack Overflow.

    I've been messing with a test environment for security purposes where a DMZ RHEL5 sendmail server is used as a relay for an Exchange 2007 server. Exchange is working in the environment; I have Vista and XP VMs using Outlook on the domain to send e-mail to each other. I've been trying to simulate an external internet VM sending an e-mail to the DMZ sendmail relay, which forwards to the Exchange server. Before everyone thinks this is too big a problem/question: I've followed the sendmail/Exchange guides, and all I want to know is how I can determine why a relayed message/e-mail in Exchange is "Undelivered".

    Basically I send an SMTP message to the sendmail server, which relays it to my Exchange. The /var/log/maillog shows the e-mail being relayed to Exchange:

        Nov 17 13:41:22 externalmailserver sendmail[9017]: pAHIfMuW009017: from=<[email protected]>, size=1233, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=[10.50.50.1]
        Nov 17 13:42:17 externalmailserver sendmail[9050]: pAHIfMuW009017: to=<[email protected]>, delay=00:00:55, xdelay=00:00:36, mailer=relay, pri=121233, relay=mailserver.xyz.local. [192.168.1.20], dsn=2.0.0, stat=Sent (<[email protected]> Queued mail for delivery)

    This is good, but the recipient never receives the e-mail from Exchange. So I started poking around Exchange. In the "Message Tracking" troubleshooting assistant I queried the processed messages and found this (I had to copy and paste the cells, sorry for the format):

        2011/11/17 RECEIVE SMTP <[email protected]> "Undelivered Mail Returned to Sender" [email protected] [email protected] 192.168.100.10 MAILSERVER\DMZ Relay [email protected]

    I just want to know if anyone has any suggestions on why the DMZ Relay Connector I set up isn't relaying, and is instead returning the forwarded e-mail to the sender as Undelivered. My Exchange Relay Receive Connector is pretty simple: the Exchange server's FQDN is set as the HELO response, all available IP addresses can receive relayed e-mail, and the IP address of my sendmail server is specifically set as a remote server.

    Read the article

  • ISA 2000 and COD MW2 Steam

    - by twlichty
    OK, so maybe not the "proper use" of network resources, but we enjoy the odd COD game during lunch hours. When we played COD4, we had a dedicated server set up at the back of the server room. With MW2, we need to be able to connect to Steam to play multiplayer. I've found this support article: https://support.steampowered.com/kb%5Farticle.php?ref=8571-GLVN-8711 which outlines all the ports I need to open. I went through and created the following rules in ISA 2000 (I'm stuck with 2000 for now):

        Protocol Definition: Steam
        Primary connection: Port 27000, UDP, Send Receive
        Secondary connection: Port range 27001-27030, Send Receive

        Protocol Definition: Steam TCP In
        Primary connection: 27014, TCP, Inbound
        Secondary connection: Port range 27015-27050, Inbound

        Protocol Definition: Steam 4380
        Primary connection: 4380, UDP, Send Receive

    When I start Steam on my local workstation (I did add an exception to the Vista firewall to allow Steam), the Steam client sits on "Updating Steam" for 5 minutes, then errors out with: "You must connect to the internet first." Any ideas? I assume I missed something. Thanks for your help.

    Read the article

  • Windows 7 host does not respond to ping

    - by gencha
    Today I tried printing on a shared printer on one of our homegroup members. Sadly it did not work (the printer is marked as offline). Shortly after, I noticed I can't even ping the machine that owns the printer (I also cannot remotely access it in any other way I've tried). Currently I'm trying to ping the machine from the router both computers are connected to, and the machine in question doesn't answer. I do receive the echo requests (as verified with Wireshark). I also added a rule to the Windows Firewall to specifically allow ICMP echo requests, but that didn't change anything. I also tried netsh firewall set icmpsetting 8 enable, but that didn't change anything either. Completely disabling the Windows Firewall has no effect on the issue either. One has to wonder: where does Windows log when and why it ignored an incoming packet? How can I get to the bottom of this?

    Here are some ways I found to dig deeper into the issue: enabling logging on the Windows Firewall, and enabling Windows Filtering Platform auditing. Both methods at least give more insight. The plain log file is full of entries like this:

        2011-11-11 14:35:27 DROP ICMP 192.168.133.1 192.168.133.128 - - 84 - - - - 8 0 - RECEIVE

    So the ICMP packets are being dropped as if that were intended. The Event Viewer now gives a little more detail:

        The Windows Filtering Platform has blocked a packet.
        Application Information:
            Process ID: 4
            Application Name: System
        Network Information:
            Direction: Inbound
            Source Address: 192.168.133.1
            Source Port: 0
            Destination Address: 192.168.133.128
            Destination Port: 8
            Protocol: 1
        Filter Information:
            Filter Run-Time ID: 214517
            Layer Name: Receive/Accept
            Layer Run-Time ID: 44

    This same entry is always repeated, with two pieces of information changing:

        Process ID: 420
        Application Name: \device\harddiskvolume2\windows\system32\svchost.exe

    The service host with PID 420 hosts the following services: Windows Audio, DHCP Client, Windows Event Log, HomeGroup Provider, TCP/IP NetBIOS Helper, Security Center. Additionally, there is currently this problem with the same machine: even though my network is set to be a "Home network", I am unable to create a new homegroup.
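
    On Windows 7 the old netsh firewall context is deprecated in favour of netsh advfirewall; as a sketch (assuming an elevated prompt), the equivalent rule for inbound ICMPv4 echo requests would be:

        netsh advfirewall firewall add rule name="Allow inbound ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow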

    Read the article

  • How do I install LFE on Ubuntu Karmic?

    - by karlthorwald
    Erlang was already installed:

        $ dpkg -l | grep erlang
        ii  erlang              1:13.b.3-dfsg-2ubuntu2  Concurrent, real-time, distributed function
        ii  erlang-appmon       1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application monitor
        ii  erlang-asn1         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP modules for ASN.1 support
        ii  erlang-base         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP virtual machine and base applica
        ii  erlang-common-test  1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for automated testin
        ii  erlang-debugger     1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for debugging and te
        ii  erlang-dev          1:13.b.3-dfsg-2ubuntu2  Erlang/OTP development libraries and header
        [... many more]

    Erlang seems to work:

        $ erl
        Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]
        Eshell V5.7.4  (abort with ^G)
        1>

    I downloaded LFE from GitHub and checked out 0.5.2:

        git clone http://github.com/rvirding/lfe.git
        cd lfe
        git checkout -b local0.5.2 e207eb2cad

        $ configure
        configure: command not found

        $ make
        mkdir -p ebin
        erlc -I include -o ebin -W0 -Ddebug +debug_info src/*.erl
        #erl -I -pa ebin -noshell -eval -noshell -run edoc file src/leex.erl -run init stop
        #erl -I -pa ebin -noshell -eval -noshell -run edoc_run application "'Leex'" '"."' '[no_packages]'
        #mv src/*.html doc/

    Must be something stupid I missed :o

        $ sudo make install
        make: *** No rule to make target `install'.  Stop.

        $ erl -noshell -noinput -s lfe_boot start
        {"init terminating in do_boot",{undef,[{lfe_boot,start,[]},{init,start_it,1},{init,start_em,1}]}}
        Crash dump was written to: erl_crash.dump
        init terminating in do_boot ()

    Is there an example of how I would create a hello world source file, compile it and run it?
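
    Not from the question, but as a hedged sketch of the requested hello world (untested against 0.5.2 specifically; the (: mod fun ...) call form matches the LFE of that era, newer releases also accept (io:format ...)):

        ;; hello.lfe
        (defmodule hello
          (export (start 0)))

        (defun start ()
          (: io format "Hello, World!~n"))

    Assuming the file is saved in the lfe checkout directory after make, one way to compile and run it without an installed lfe script would be:

        erl -pa ebin -noshell -eval 'lfe_comp:file("hello"), hello:start(), init:stop().'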

    Read the article

  • Problem building STLport for Android with NDK r5

    - by user558299
    Hi all, I'm trying to build STLport for Android. I followed these steps, but they are not working:

    1 - Clone the STLport repository:

        git clone git://stlport.git.sourceforge.net/gitroot/stlport/stlport

    2 - Configure the environment:

        ./configure --target=arm-eabi --with-extra-cxxflags="-fshort-enums" --with-extra-cflags="-fshort-enums"

    3 - From the src directory, build it:

        make SYSROOT="{MY NDK path}/platforms/android-5/arch-arm/" release-static

    But I get the following errors:

        In file included from ../stlport/stl/_alloc.h:45,
                         from ../stlport/memory:29,
                         from dll_main.cpp:41:
        ../stlport/stl/_new.h:45:24: error: new: No such file or directory
        In file included from ../stlport/stl/_limits.h:36,
                         from ../stlport/limits:29,
                         from dll_main.cpp:48:
        ../stlport/stl/_cwchar.h:26:30: error: cstddef: No such file or directory
        In file included from ../stlport/stl/_utility.h:35,
                         from ../stlport/utility:35,
                         from dll_main.cpp:40:
        ../stlport/type_traits:889: error: 'declval' was not declared in this scope
        ../stlport/type_traits:889: error: expected primary-expression before '>' token
        ../stlport/type_traits:889: error: expected primary-expression before ')' token
        ../stlport/type_traits:889: error: 'declval' was not declared in this scope
        ../stlport/type_traits:889: error: expected primary-expression before '>' token
        ../stlport/type_traits:889: error: expected primary-expression before ')' token
        ../stlport/type_traits:889: error: ISO C++ forbids declaration of 'decltype' with no type
        ../stlport/type_traits:889: error: ISO C++ forbids in-class initialization of non-const static member 'decltype'
        ../stlport/type_traits:889: error: template declaration of 'int std::tr1::detail::decltype'
        ../stlport/type_traits:942: error: ISO C++ forbids declaration of 'decltype' with no type
        ../stlport/type_traits:942: error: ISO C++ forbids in-class initialization of non-const static member 'decltype'
        ../stlport/type_traits:942: error: template declaration of 'int std::tr1::detail::decltype'
        make: *** [obj/arm-eabi-gcc/so/dll_main.o] Error 1

    Is there any include dir or configuration I'm missing? Thanks, Sergio

    Read the article

  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to my remote server, which has the repo, web, app, and MySQL servers all on the same machine. I am following this walkthrough: http://www.capify.org/index.php/From_The_Beginning

    I get to the command:

        cap deploy:start

    Then I get this error:

        *** [err :: example.com] sudo: unknown user: app
        command finished
        failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com

    Am I supposed to add an 'app' user, or is there a way of changing which user the command runs as? This is my deploy.rb:

        set :application, "example"
        set :repository, "[email protected]:example.git"
        set :user, "trobrock"
        set :branch, 'master'
        set :deploy_to, "/var/www/example"
        set :scm, :git # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

        role :web, "example.com"                   # Your HTTP server, Apache/etc
        role :app, "example.com"                   # This may be the same as your `Web` server
        role :db,  "example.com", :primary => true # This is where Rails migrations will run

    And obviously, everywhere it says example.com that is my server's hostname, and wherever it just says example, that is the app name.
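
    A sketch of the usual way around this (an assumption about the cause, but consistent with the error): in Capistrano 2 the built-in deploy:start runs script/spin via sudo as the :runner user, which defaults to "app"; pointing :runner at the deploy user, or disabling sudo, avoids the unknown-user error.

        # deploy.rb additions (sketch)
        set :runner, user        # run script/spin as "trobrock" instead of the default "app"
        # set :use_sudo, false   # alternative: don't use sudo at all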

    Read the article
