Search Results

Search found 1306 results on 53 pages for 'cc mf'.

Page 15/53 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Why does perl crash with "*** glibc detected *** perl: munmap_chunk(): invalid pointer"?

    - by sid_com
        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use XML::LibXML::Reader;

        my $reader = XML::LibXML::Reader->new( location => 'http://www.heise.de/' )
            or die $!;
        while ( $reader->read ) {
            say $reader->name;
        }

    At the end of the output from this script I get these error messages:

        *** glibc detected *** perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 ***
        ======= Backtrace: =========
        /lib64/libc.so.6[0x7fb84952fc76]
        ...
        ======= Memory map: ========
        00400000-0053d000 r-xp 00000000 08:01 182002    /usr/local/bin/perl
        ...

    Is this due to a bug? perl -V:

        Summary of my perl5 (revision 5 version 12 subversion 0) configuration:
          Platform:
            osname=linux, osvers=2.6.31.12-0.2-desktop, archname=x86_64-linux
            uname='linux linux1 2.6.31.12-0.2-desktop #1 smp preempt 2010-03-16 21:25:39 +0100 x86_64 x86_64 x86_64 gnulinux '
            config_args='-Dnoextensions=ODBM_File'
            hint=recommended, useposix=true, d_sigaction=define
            useithreads=undef, usemultiplicity=undef
            useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
            use64bitint=define, use64bitall=define, uselongdouble=undef
            usemymalloc=n, bincompat5005=undef
          Compiler:
            cc='cc', ccflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
            optimize='-O2',
            cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
            ccversion='', gccversion='4.4.1 [gcc-4_4-branch revision 150839]', gccosandvers=''
            intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
            d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
            ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
            alignbytes=8, prototype=define
          Linker and Libraries:
            ld='cc', ldflags=' -fstack-protector -L/usr/local/lib'
            libpth=/usr/local/lib /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64
            libs=-lnsl -ldl -lm -lcrypt -lutil -lc
            perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc
            libc=/lib/libc-2.10.1.so, so=so, useshrplib=false, libperl=libperl.a
            gnulibc_version='2.10.1'
          Dynamic Linking:
            dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
            cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector'

        Characteristics of this binary (from libperl):
          Compile-time options: PERL_DONT_CREATE_GVSV PERL_MALLOC_WRAP USE_64_BIT_ALL
                                USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO USE_PERL_ATOF
          Built under linux
          Compiled at Apr 15 2010 13:25:46
          @INC:
            /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux
            /usr/local/lib/perl5/site_perl/5.12.0
            /usr/local/lib/perl5/5.12.0/x86_64-linux
            /usr/local/lib/perl5/5.12.0
            .
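    One workaround worth trying (a sketch, assuming the crash is triggered by the reader's location-based network code path, and that LWP::Simple is installed): fetch the page yourself and parse from a string. Since heise.de serves HTML rather than well-formed XML, libxml's recover flag is turned on as well.

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use LWP::Simple qw(get);
        use XML::LibXML::Reader;

        # Download first, then parse from memory instead of letting
        # XML::LibXML::Reader manage the HTTP transfer itself.
        my $html = get('http://www.heise.de/') or die 'fetch failed';
        my $reader = XML::LibXML::Reader->new( string => $html, recover => 2 )
            or die $!;
        while ( $reader->read ) {
            say $reader->name;
        }

    If the script still crashes when parsing from a string, the bug is in the parser itself rather than the transport, which narrows down what to report.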

    Read the article

  • django customizing form labels

    - by Henri
    I have a problem customizing labels in a Django form. This is the form code in the file contact_form.py:

        from django import forms

        class ContactForm(forms.Form):
            def __init__(self, subject_label="Subject", message_label="Message",
                         email_label="Your email", cc_myself_label="Cc myself",
                         *args, **kwargs):
                super(ContactForm, self).__init__(*args, **kwargs)
                self.fields['subject'].label = subject_label
                self.fields['message'].label = message_label
                self.fields['email'].label = email_label
                self.fields['cc_myself'].label = cc_myself_label

            subject = forms.CharField(widget=forms.TextInput(attrs={'size': '60'}))
            message = forms.CharField(widget=forms.Textarea(attrs={'rows': 15, 'cols': 80}))
            email = forms.EmailField(widget=forms.TextInput(attrs={'size': '60'}))
            cc_myself = forms.BooleanField(required=False)

    The view I am using this in looks like:

        def contact(request, product_id=None):
            ...
            if request.method == 'POST':
                form = contact_form.ContactForm(request.POST)
                if form.is_valid():
                    ...
            else:
                form = contact_form.ContactForm(
                    subject_label="Subject",
                    message_label="Your Message",
                    email_label="Your email",
                    cc_myself_label="Cc myself")

    The strings used for initializing the labels will eventually depend on the language, i.e. English, Dutch, French etc. When I test the form, the email is not sent, and instead of the redirect page the form comes back with

        <QueryDict: {u'cc_myself': [u'on'], u'message': [u'message body'], u'email': [u'[email protected]'], u'subject': [u'test message']}>

    where the subject label was before. This is obviously a dictionary representing the form fields and their contents. When I change contact_form.py so that the whole __init__ method is commented out, i.e. disabling the initialization, then everything works: the form data is sent by email and the redirect page shows up. So obviously something in the __init__ code isn't right. But what? I would really appreciate some help.
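    The bug is in the __init__ signature: the label keywords are declared before *args, so the positional call ContactForm(request.POST) binds request.POST to subject_label instead of to the form's data argument. The form is therefore unbound, never validates, and the QueryDict ends up displayed as the subject label. A sketch of one fix is to pop the labels out of **kwargs so positional arguments pass through untouched:

        from django import forms

        class ContactForm(forms.Form):
            subject = forms.CharField(widget=forms.TextInput(attrs={'size': '60'}))
            message = forms.CharField(widget=forms.Textarea(attrs={'rows': 15, 'cols': 80}))
            email = forms.EmailField(widget=forms.TextInput(attrs={'size': '60'}))
            cc_myself = forms.BooleanField(required=False)

            def __init__(self, *args, **kwargs):
                # Remove the custom labels from kwargs before calling the
                # parent constructor, so ContactForm(request.POST) still
                # reaches forms.Form as the 'data' argument.
                labels = {
                    'subject': kwargs.pop('subject_label', "Subject"),
                    'message': kwargs.pop('message_label', "Message"),
                    'email': kwargs.pop('email_label', "Your email"),
                    'cc_myself': kwargs.pop('cc_myself_label', "Cc myself"),
                }
                super(ContactForm, self).__init__(*args, **kwargs)
                for name, label in labels.items():
                    self.fields[name].label = label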

    Read the article

  • Practical Scheme Programming

    - by Ixmatus
    It's been a few months since I've touched Scheme, and I decided to implement a command-line income partitioner in it. My initial implementation used plain recursion, but I figured a continuation would be more appropriate for this type of program. I would appreciate it if anyone (more skilled with Scheme than I) could take a look at this and suggest improvements. I'm sure the multiple (display ...) lines are an ideal opportunity to use a macro as well (I just haven't gotten to macros yet).

        (define (ab-income)
          (call/cc
           (lambda (cc)
             (let ((out (display "Income: "))
                   (income (string->number (read-line))))
               (cond ((<= income 600)
                      (display (format "Please enter an amount greater than $600.00~n~n"))
                      (cc (ab-income)))
                     (else
                      (let ((bills (* (/ 30 100) income))
                            (taxes (* (/ 20 100) income))
                            (savings (* (/ 10 100) income))
                            (checking (* (/ 40 100) income)))
                        (display (format "~nDeduct for bills:---------------------- $~a~n" (real->decimal-string bills 2)))
                        (display (format "Deduct for taxes:---------------------- $~a~n" (real->decimal-string taxes 2)))
                        (display (format "Deduct for savings:-------------------- $~a~n" (real->decimal-string savings 2)))
                        (display (format "Remainder for checking:---------------- $~a~n" (real->decimal-string checking 2))))))))))

    Invoking (ab-income) asks for input, and if anything at or below 600 is provided it (from my understanding) returns (ab-income) at the current continuation. My first implementation (as I said earlier) used plain-jane recursion. It wasn't bad at all either, but I figured every return call to (ab-income) when the value was at or below 600 kept expanding the function. (Please correct me if that apprehension is incorrect!)
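    For comparison, here is a sketch without call/cc (assuming PLT Scheme/Racket, which is where read-line and real->decimal-string in the original come from): a named let re-prompts until the input is valid. Because the recursive call sits in tail position, Scheme implementations are required to run it in constant stack space, so the "expanding function" worry doesn't apply to either version.

        ;; Loop until a valid income is entered, then print the split.
        (define (ab-income)
          (let loop ()
            (display "Income: ")
            (let ((income (string->number (read-line))))
              (if (or (not income) (<= income 600))
                  (begin
                    (display (format "Please enter an amount greater than $600.00~n~n"))
                    (loop))
                  (for-each
                   (lambda (label rate)
                     (display (format "~a $~a~n" label
                                      (real->decimal-string (* rate income) 2))))
                   '("Deduct for bills:----------------------"
                     "Deduct for taxes:----------------------"
                     "Deduct for savings:--------------------"
                     "Remainder for checking:----------------")
                   '(3/10 2/10 1/10 4/10))))))

    As a bonus, checking (not income) handles non-numeric input, which would make the original's (<= income 600) comparison error out.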

    Read the article

  • Python import error: Symbol not found, but the symbol is not present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread:

        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    The file in question (_spread.so) includes the symbol:

        $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
                 U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
                 U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    (twice because the file is a fat ppc/x86 binary)

    EDIT: Okay, as James points out, the U means that the symbol is undefined but required by the object file. With some more digging I've noticed (where I should have looked first...) these linker errors during compilation:

        CC=g++ CXX=g++ g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -O3 -I../.. -I../.. -I/usr/local/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -O2 -I/usr/local/include -std=c++98 -pipe -fno-gnu-keywords -fvisibility-inlines-hidden -o SsrcSpread.o -c SsrcSpread.cc
        CC=g++ CXX=g++ /bin/sh ../../libtool --tag=CXX --mode=link g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python \
            -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -L../../ssrc -lssrcspread -L/usr/local/lib -ltspread-core -o _spread.so SsrcSpread.o
        mkdir .libs
        g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -o _spread.so SsrcSpread.o -Wl,-bind_at_load -L/Dev/libssrcspread-1.0.6/ssrc /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a -L/usr/local/lib -ltspread-core
        ld: warning: in ~/Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (ppc)
        ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (ppc)
        ld: warning: in /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (i386)

    I'm also not entirely sure that the 10.4 SDK is the right one for compiling Python modules (but switching to 10.6 didn't seem to help).
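    Those ld warnings are the real story: the module is being built for ppc and i386, but neither libssrcspread.a nor libtspread-core.dylib contains code for those architectures, so the Mailbox::ZeroTimeout symbol is left undefined and only fails at dlopen time (which -undefined dynamic_lookup permits). A quick way to confirm the mismatch (a sketch, using the paths from the log above):

        $ lipo -info /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a
        $ file /usr/local/lib/libtspread-core.dylib

    If those report a different architecture (e.g. x86_64 only), the fix is to build the Python module for the same architecture as the libraries, or rebuild the libraries as universal binaries, rather than linking '-arch ppc -arch i386' against them.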

    Read the article

  • CruiseControl.NET MSBuild task: setting the XML output name

    - by Eric Brown - Cal
    We are running version 1.5.6755.1 of CruiseControl.NET. Here is the block that executes a build:

        <!-- MSBuild of Source Code -->
        <cb:define name="BuildOneProject-block">
          <msbuild>
            <executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
            <!-- Directory where source is -->
            <workingDirectory>D:\CC\$(AppName)\Source</workingDirectory>
            <!-- Solution file to be built -->
            <projectFile>D:\CC\$(AppName)\Source\$(ProjectName)\$(ProjectName).csproj</projectFile>
            <buildArgs>/noconsolelogger /p:SolutionName=\$(AppName) /p:SolutionDir=D:\CC\$(AppName)\Source /p:Configuration=$(ReleaseOrDebug) /v:diag</buildArgs>
            <targets>Build</targets>
            <timeout>900</timeout>
            <logger>C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
          </msbuild>
        </cb:define>

    When this runs, it generates a file with a name like

        msbuild-results-5cb1c8fa-1bba-4e97-a0b1-b2bf637308dc.xml

    Is there another tag on the msbuild task that allows me to name the XML file? Is there an argument to the logger that allows me to specify the name of the XML file?

    Read the article

  • How to get the source code as a registered user

    - by nir143
    Hi. I downloaded the source of a site's page, but when I did, I saw that the site identifies my program as a guest. I searched on Google and figured out that I can send a cookie when I request the source. That is what I have tried to do, but it still doesn't identify me as a registered user:

        CookieContainer cj = new CookieContainer();
        string all = "";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(Url);
        req.CookieContainer = cj;
        HttpWebResponse res = (HttpWebResponse)req.GetResponse();
        CookieCollection cs = cj.GetCookies(req.RequestUri);
        CookieContainer cc = new CookieContainer();
        cc.Add(cs);
        req.CookieContainer = cc;
        StreamReader read = new StreamReader(res.GetResponseStream());
        all = read.ReadToEnd();
        read.Close();
        return all;

    What is wrong here? Thanks very much for any help. (If it helps, I have the account details of a registered user of the site.)
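    The snippet never logs in: it makes a single anonymous request and then shuffles that request's guest cookies into a new container, which changes nothing. A sketch of the usual approach is below -- POST the login form first, then fetch the page with the same CookieContainer so the session cookie rides along. The login URL and the field names "user" and "pass" are placeholders; take the real ones from the site's login form.

        using System.IO;
        using System.Net;
        using System.Text;

        CookieContainer jar = new CookieContainer();

        // 1. Log in; the server's Set-Cookie ends up in 'jar'.
        HttpWebRequest login = (HttpWebRequest)WebRequest.Create("http://example.com/login");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = jar;
        byte[] body = Encoding.UTF8.GetBytes("user=myname&pass=mypassword");
        login.ContentLength = body.Length;
        using (Stream s = login.GetRequestStream())
            s.Write(body, 0, body.Length);
        login.GetResponse().Close();

        // 2. Request the page with the same container; the site now sees
        //    the session cookie and serves the logged-in version.
        HttpWebRequest page = (HttpWebRequest)WebRequest.Create(Url);
        page.CookieContainer = jar;
        using (StreamReader reader = new StreamReader(page.GetResponse().GetResponseStream()))
            return reader.ReadToEnd();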

    Read the article

  • Filling a byte array in Java

    - by Corleone
    Hey all! For part of a project I'm working on I am implementing an RTPpacket where I have to fill the header array of bytes with the RTP header fields.

        //size of the RTP header:
        static int HEADER_SIZE = 12; // bytes

        //Fields that compose the RTP header
        public int Version;        // 2 bits
        public int Padding;        // 1 bit
        public int Extension;      // 1 bit
        public int CC;             // 4 bits
        public int Marker;         // 1 bit
        public int PayloadType;    // 7 bits
        public int SequenceNumber; // 16 bits
        public int TimeStamp;      // 32 bits
        public int Ssrc;           // 32 bits

        //Bitstream of the RTP header
        public byte[] header = new byte[ HEADER_SIZE ];

    This was my approach:

        /*
         * bits 0-1: Version
         * bit 2: Padding
         * bit 3: Extension
         * bits 4-7: CC
         */
        header[0] = new Integer( (Version << 6)|(Padding << 5)|(Extension << 6)|CC ).byteValue();

        /*
         * bit 0: Marker
         * bits 1-7: PayloadType
         */
        header[1] = new Integer( (Marker << 7)|PayloadType ).byteValue();

        /* SequenceNumber takes 2 bytes = 16 bits */
        header[2] = new Integer( SequenceNumber >> 8 ).byteValue();
        header[3] = new Integer( SequenceNumber ).byteValue();

        /* TimeStamp takes 4 bytes = 32 bits */
        for ( int i = 0; i < 4; i++ )
            header[7-i] = new Integer( TimeStamp >> (8*i) ).byteValue();

        /* Ssrc takes 4 bytes = 32 bits */
        for ( int i = 0; i < 4; i++ )
            header[11-i] = new Integer( Ssrc >> (8*i) ).byteValue();

    Any other, maybe 'better' ways to do this?
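    A plain (byte) cast performs the same truncation as new Integer(...).byteValue() without allocating an Integer per field. A sketch (note one probable bug carried over from above: with the bit layout given in the comments, the extension bit belongs at bit 4, so the shift should be 4, not 6):

        header[0] = (byte) ((Version << 6) | (Padding << 5) | (Extension << 4) | CC);
        header[1] = (byte) ((Marker << 7) | PayloadType);

        // 16-bit sequence number, big-endian
        header[2] = (byte) (SequenceNumber >> 8);
        header[3] = (byte) SequenceNumber;

        // 32-bit timestamp and SSRC, big-endian
        for (int i = 0; i < 4; i++) {
            header[7 - i]  = (byte) (TimeStamp >> (8 * i));
            header[11 - i] = (byte) (Ssrc >> (8 * i));
        }

    Alternatively, java.nio.ByteBuffer.wrap(header) with putShort()/putInt() writes the big-endian multi-byte fields in one call each; only the first two bytes would still be packed by hand.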

    Read the article

  • Linux: modpost does not build anything

    - by waffleman
    I am having problems getting any kernel modules to build on my machine. Whenever I build a module, modpost always says there are zero modules:

        MODPOST 0 modules

    To troubleshoot the problem, I wrote a test module (hello.c):

        #include <linux/module.h>  /* Needed by all modules */
        #include <linux/kernel.h>  /* Needed for KERN_INFO */
        #include <linux/init.h>    /* Needed for the macros */

        static int __init hello_start(void)
        {
            printk(KERN_INFO "Loading hello module...\n");
            printk(KERN_INFO "Hello world\n");
            return 0;
        }

        static void __exit hello_end(void)
        {
            printk(KERN_INFO "Goodbye Mr.\n");
        }

        module_init(hello_start);
        module_exit(hello_end);

    Here is the Makefile for the module:

        obj-m = hello.o
        KVERSION = $(shell uname -r)

        all:
                make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) modules

        clean:
                make -C /lib/modules/$(KVERSION)/build M=$(shell pwd) clean

    When I build it on my machine, I get the following output:

        make -C /lib/modules/2.6.32-27-generic/build M=/home/waffleman/tmp/mod-test modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.32-27-generic'
          CC [M]  /home/waffleman/tmp/mod-test/hello.o
          Building modules, stage 2.
          MODPOST 0 modules
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-27-generic'

    When I make the module on another machine, it is successful:

        make -C /lib/modules/2.6.24-27-generic/build M=/home/somedude/tmp/mod-test modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.24-27-generic'
          CC [M]  /home/somedude/tmp/mod-test/hello.o
          Building modules, stage 2.
          MODPOST 1 modules
          CC      /home/somedude/tmp/mod-test/hello.mod.o
          LD [M]  /home/somedude/tmp/mod-test/hello.ko
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.24-27-generic'

    I looked for any relevant documentation about modpost but found little. Does anyone know how modpost decides what to build? Is there an environment variable that I am possibly missing? BTW, here is what I am running:

        uname -a
        Linux waffleman-desktop 2.6.32-27-generic #49-Ubuntu SMP Wed Dec 1 23:52:12 UTC 2010 i686 GNU/Linux
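    One way to dig further (a sketch): kbuild's verbose mode prints the exact modpost invocation and the module list it is fed, so running this on both machines and comparing the output usually shows where the module gets dropped.

        make -C /lib/modules/$(uname -r)/build M=$(pwd) V=1 modules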

    Read the article

  • Lifetime and Scope of a Temporary Variable

    - by Yan Cheng CHEOK
        #include <cstdio>
        #include <string>

        void fun(const char* c) {
            printf("--> %s\n", c);
        }

        std::string get() {
            std::string str = "Hello World";
            return str;
        }

        int main() {
            const char *cc = get().c_str();
            // cc is not valid at this point, as it is pointing to the
            // temporary string's internal buffer, and the temporary string
            // has already been destroyed here.
            fun(cc);

            // But I am surprised that this call yields a valid result.
            // It seems that the returned temporary string is valid within
            // the scope (...). My understanding is that scope means {...}.
            // Is this valid behavior guaranteed by the C++ standard, or does
            // it depend on your compiler vendor's implementation?
            fun(get().c_str());

            getchar();
        }

    The output is:

        -->
        --> Hello World

    Hello, may I know whether the correct behavior here is guaranteed by the C++ standard, or whether it depends on your compiler vendor's implementation? I have tested this under VC2008 and VC6; it works fine in both.
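    The standard does settle this: a temporary lives until the end of the full expression in which it was created, so in fun(get().c_str()) the string outlives the call and the pointer is valid. In const char *cc = get().c_str(); the temporary dies at the semicolon, so the later fun(cc) reads a dangling pointer -- undefined behavior that merely happens to print an empty string here. A sketch of the safe patterns:

        #include <cstdio>
        #include <string>

        std::string get() { return "Hello World"; }
        void fun(const char* c) { printf("--> %s\n", c); }

        int main() {
            // Guaranteed: the temporary survives the whole full expression,
            // including the call to fun().
            fun(get().c_str());

            // Also safe: give the string a name so it lives until the end
            // of the enclosing block, then take c_str() from it.
            const std::string s = get();
            fun(s.c_str());
        }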

    Read the article

  • How to avoid the following Purify-detected memory leak in C++?

    - by Abhijeet
    Hi, I am getting the following memory leak. It is probably being caused by std::string. How can I avoid it?

        PLK: 23 bytes potentially leaked at 0xeb68278
          * Suppressed in /vobs/ubtssw_brrm/test/testcases/.purify [line 3]
          * This memory was allocated from:
                malloc                 [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o]
                operator new(unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
                operator new(unsigned) [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o]
                std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_S_create(unsigned, unsigned, std::allocator<char> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
                std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_M_clone(std::allocator<char> const&, unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
                std::string<char, std::char_traits<char>, std::allocator<char>>::string<char, std::char_traits<char>, std::allocator<char>>(std::string<char, std::char_traits<char>, std::allocator<char>> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
                uec_UEDir::getEntryToUpdateAfterInsertion(rcapi_ImsiGsmMap const&, rcapi_ImsiGsmMap&, std::_Rb_tree_iterator<std::pair<std::string<char, std::char_traits<char>, std::allocator<char>> const, UEDirData >>&) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:2278]
                uec_UEDir::addUpdate(rcapi_ImsiGsmMap const&, LocalUEDirInfo&, rcapi_ImsiGsmMap&, int, unsigned char) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:282]
                ucx_UEDirHandler::addUpdateUEDir(rcapi_ImsiGsmMap, UEDirUpdateType, acap_PresenceEvent) [/vobs/ubtssw_brrm/ucx/linux-x86/../src/ucx_UEDirHandler.cc:374]

    Read the article

  • Template compilation error in Sun Studio 12

    - by Jagannath
    We are migrating to Sun Studio 12.1, and with the new compiler [CC: Sun C++ 5.10 SunOS_sparc 2009/06/03] I am getting a compilation error in code that compiled fine with the earlier Sun compiler [CC: Sun WorkShop 6 update 2 C++ 5.3 2001/05/15]. This is the compilation error:

        "Sample.cc": Error: Could not find a match for LoopThrough(int[2]) needed in main().
        1 Error(s) detected.
        *** Error code 1.

    CODE:

        #include <iostream>

        #define PRINT_TRACE(STR) \
            std::cout << __FILE__ << ":" << __LINE__ << ":" << STR << "\n";

        template<size_t SZ>
        void LoopThrough(const int(&Item)[SZ])
        {
            PRINT_TRACE("Specialized version");
            for (size_t index = 0; index < SZ; ++index)
            {
                std::cout << Item[index] << "\n";
            }
        }

        /*
        template<typename Type, size_t SZ>
        void LoopThrough(const Type(&Item)[SZ])
        {
            PRINT_TRACE("Generic version");
        }
        */

        int main()
        {
            {
                int arr[] = { 1, 2 };
                LoopThrough(arr);
            }
        }

    If I uncomment the generic version, the code compiles fine and the generic version is called. I don't see this problem with MSVC 2010 (with extensions disabled), and the same case on ideone here: the specialized version of the function is called. Now the question is: is this a bug in the Sun compiler? If yes, how can we file a bug report?
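    Until the front end is fixed, a workaround worth trying (a sketch; the meaning of the call is unchanged) is to bypass the failing array-size deduction by supplying the template argument explicitly:

        int main()
        {
            int arr[] = { 1, 2 };
            LoopThrough<2>(arr);   // explicit SZ instead of deducing int[2]
        }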

    Read the article

  • Cleaning up PHP Code

    - by Michael
    Hi, I've noticed I am a very sloppy coder and do things out of the ordinary. Can you take a look at my code and give me some tips on how to code more efficiently? What can I do to improve?

        session_start();

        /* check if the token is correct */
        if ($_SESSION['token'] == $_GET['custom1']) {

            /* connect to db */
            mysql_connect('localhost', 'x', 'x') or die(mysql_error());
            mysql_select_db('x');

            /* get data */
            $orderid   = mysql_real_escape_string($_GET['order_id']);
            $amount    = mysql_real_escape_string($_GET['amount']);
            $product   = mysql_real_escape_string($_GET['product1Name']);
            $cc        = mysql_real_escape_string($_GET['Credit_Card_Number']);
            $length    = strlen($cc);
            $last      = 4;
            $start     = $length - $last;
            $last4     = substr($cc, $start, $last);
            $ipaddress = mysql_real_escape_string($_GET['ipAddress']);
            $accountid = $_SESSION['user_id'];
            $credits   = mysql_real_escape_string($_GET['custom3']);

            /* insert history into db */
            mysql_query("INSERT into billinghistory
                         (orderid, price, description, credits, last4, orderip, accountid)
                         VALUES
                         ('$orderid', '$amount', '$product', '$credits', '$last4', '$ipaddress', '$accountid')");

            /* add the credits to the users account */
            mysql_query("UPDATE accounts SET credits = credits + $credits WHERE user_id = '$accountid'");

            /* redirect if successful */
            header("location: index.php?x=1");
        } else {
            /* something messed up */
            header("location: error.php");
        }
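    Two easy wins, sketched below with PDO: prepared statements replace the manual escaping (and protect the UPDATE, where $credits is currently interpolated unquoted), and substr() with a negative offset replaces the three-variable dance for the last four digits. The DSN and credentials are placeholders.

        $last4 = substr($cc, -4);

        $db = new PDO('mysql:host=localhost;dbname=x', 'x', 'x',
                      array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

        $stmt = $db->prepare(
            'INSERT INTO billinghistory
               (orderid, price, description, credits, last4, orderip, accountid)
             VALUES (?, ?, ?, ?, ?, ?, ?)');
        $stmt->execute(array($orderid, $amount, $product, $credits,
                             $last4, $ipaddress, $accountid));

        $stmt = $db->prepare('UPDATE accounts SET credits = credits + ? WHERE user_id = ?');
        $stmt->execute(array($credits, $accountid));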

    Read the article

  • Displaying cookies as key=value for all domains?

    - by OverTheRainbow
    Hello, this question pertains to the use of the cookie-capable WebClient derived class presented in the "How can I get the WebClient to use Cookies?" question. I'd like to use a ListBox to 1) display each cookie individually as "key=value" (the For Each loop displays all of them as one string), and 2) be able to display all cookies, regardless of the domain from which they came ("www.google.com", here):

        Imports System.IO
        Imports System.Net

        Public Class Form1
            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim webClient As New CookieAwareWebClient
                Const URL = "http://www.google.com"
                Dim response As String
                response = webClient.DownloadString(URL)
                RichTextBox1.Text = response

                'How to display cookies as key/value in ListBox?
                'PREF=ID=5e770c1a9f279d5f:TM=1274032511:LM=1274032511:S=1RDPaKJKpoMT9T54
                For Each mycc In webClient.cc.GetCookies(New Uri(URL))
                    ListBox1.Items.Add(mycc.ToString)
                Next
            End Sub
        End Class

        Public Class CookieAwareWebClient
            Inherits WebClient

            Public cc As New CookieContainer()
            Private lastPage As String

            Protected Overrides Function GetWebRequest(ByVal address As System.Uri) As System.Net.WebRequest
                Dim R = MyBase.GetWebRequest(address)
                If TypeOf R Is HttpWebRequest Then
                    With DirectCast(R, HttpWebRequest)
                        .CookieContainer = cc
                        If Not lastPage Is Nothing Then
                            .Referer = lastPage
                        End If
                    End With
                End If
                lastPage = address.ToString()
                Return R
            End Function
        End Class

    Thank you.
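    For part 1, casting each entry to Net.Cookie exposes Name and Value separately. For part 2, CookieContainer has no public "enumerate everything" API, so a sketch is to record each URI the client visits (for example inside GetWebRequest) and list cookies per recorded URI; visitedUris below is such an assumed list, not an existing member:

        For Each visited As Uri In visitedUris  'assumed: collected in GetWebRequest
            For Each c As Net.Cookie In webClient.cc.GetCookies(visited)
                ListBox1.Items.Add(c.Domain & ": " & c.Name & "=" & c.Value)
            Next
        Next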

    Read the article

  • Can't access NSString after callback in [NSURLConnection sendSynchronousRequest]

    - by John ClearZ
    Hi, I am trying to get a cookie from a site, which I can do with no problem. The problem arises when I try to save the cookie to an NSString in a holder class (or anywhere else, for that matter) and access it outside the delegate method where it is first created.

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            int i;
            NSString* c;
            NSArray* all = [NSHTTPCookie cookiesWithResponseHeaderFields:[response allHeaderFields]
                                                                  forURL:[NSURL URLWithString:@"http://johncleary.net"]];
            //NSLog(@"RESPONSE HEADERS: \n%@", [response allHeaderFields]);
            for (i = 0; i < [all count]; i++) {
                NSHTTPCookie* cc = [all objectAtIndex: i];
                c = [NSString stringWithFormat: @"%@=%@", [cc name], [cc value]];
                [Cookie setCookie: c];
                NSLog([Cookie cookie]); // Prints the cookie fine.
            }
            [receivedData setLength:0];
        }

    I can see and print the cookie while I am in the method, but I can't access it from anywhere else, even though it gets stored in the holder class:

        @interface Cookie : NSObject {
            NSString* cookie;
        }
        + (NSString*) cookie;
        + (void) setCookie: (NSString*) cookieValue;
        @end

        int main (void)
        {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            JCLogin* login;
            login = [JCLogin new];
            [login DoLogin];
            NSLog([Cookie cookie]); // Crashes the program
            [pool drain];
            return 0;
        }
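    Two likely problems, assuming a pre-ARC setup like this one. First, stringWithFormat: returns an autoreleased string, so if the class-method setter just stores the pointer, the string is gone once the pool drains and the later read crashes; the setter needs to copy (or retain) the value. Second, passing a data-derived string as the NSLog format is itself a crash risk -- always log through @"%@". A sketch of a holder implementation consistent with the interface above:

        static NSString *gCookie = nil;   // class methods need static storage

        @implementation Cookie
        + (NSString *)cookie {
            return gCookie;
        }
        + (void)setCookie:(NSString *)cookieValue {
            if (gCookie != cookieValue) {
                [gCookie release];
                gCookie = [cookieValue copy];  // own the string
            }
        }
        @end

        // ...and at the call sites:
        NSLog(@"%@", [Cookie cookie]);

    It is also worth confirming that DoLogin has actually completed (and the delegate has fired) before main reads the cookie; with an asynchronous NSURLConnection and no run loop, the delegate may never run at all.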

    Read the article

  • How to keep track of call statistics? C++

    - by tf.rz
    I'm working on a project that delivers statistics to the user. I created a class called Dog, and it has several functions: speak, woof, run, fetch, etc. I want to have a function that spits out how many times each function has been called. I'm also interested in the constructor and destructor calls as well. I have a header file which declares all the functions, then a separate .cc file that implements them. My question is: is there a way to keep track of how many times each function is called? I have a function called print that will fetch the "statistics" and then output them to standard output. I was considering using static integers as part of the class itself, declaring several integers to keep track of those things. I know each static data member is defined once and zero-initialized, and then I'll increment the integers in the .cc functions. I also thought about having static integers as global variables in the .cc file. Which way is easier? Or is there a better way to do this? Any help is greatly appreciated!
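    The static-member approach works well and keeps the counters scoped to the class. A minimal sketch (the counter names are illustrative):

        // dog.h
        #include <iostream>

        class Dog {
        public:
            Dog()  { ++ctor_calls; }
            ~Dog() { ++dtor_calls; }
            void speak();
            static void print();          // dumps the statistics
        private:
            static int ctor_calls;        // one counter per tracked call
            static int dtor_calls;
            static int speak_calls;
        };

        // dog.cc -- static members are defined once, zero-initialized
        int Dog::ctor_calls  = 0;
        int Dog::dtor_calls  = 0;
        int Dog::speak_calls = 0;

        void Dog::speak() {
            ++speak_calls;
            std::cout << "Woof!\n";
        }

        void Dog::print() {
            std::cout << "ctors: " << ctor_calls
                      << " dtors: " << dtor_calls
                      << " speak: " << speak_calls << "\n";
        }

    File-scope statics in the .cc work too, but class members make the relationship explicit and let print() live on the class. (If the counts must stay accurate across threads, the counters would need to be atomic.)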

    Read the article

  • Problem compiling C in Emacs (Ubuntu)

    - by user565739
    I wrote a very simple program: (sorry, I typed the code the right way, but the display is weird -- how can I fix that?)

        #include <stdio.h>

        int main( void )
        {
            int i;
            for ( i = 0; i <= 10; i++ ) {
                printf( "%d hello!\n", i );
            }
            return 0;
        }

    Usually I compile a C program in a terminal with the command

        cc -o xxx xxx.c

    So in Emacs, when I type M-x compile, I change make -k to cc -o. But I get an error like

        cc: argument to '-o' is missing

    What's the problem? If I use make, I still get the error

        No targets specified and no makefiles found.

    Finally, if the above problem is fixed, how can I define a custom hotkey for compile? I already know how to do something like

        (global-set-key [f8] 'goto-line)

    but I don't know how to set a hotkey for an action only in c-mode.
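    The compile prompt needs a complete command, not just a prefix: M-x compile runs exactly what you type, so "cc -o" by itself leaves -o without its argument, while the full "cc -o hello hello.c" works just as it does in a terminal. A sketch for automating both halves of the question, assuming the buffer is visiting the .c file (F9 is an arbitrary choice):

        ;; In c-mode, build the compile command from the file name and
        ;; bind 'compile' to a key, locally to C buffers only.
        (add-hook 'c-mode-hook
                  (lambda ()
                    (set (make-local-variable 'compile-command)
                         (let ((f (file-name-nondirectory buffer-file-name)))
                           (format "cc -o %s %s" (file-name-sans-extension f) f)))
                    (local-set-key [f9] 'compile)))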

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
The link-editor (ld) in Solaris 11 has a new feature that we call guidance, intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice -- you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise.

We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case:

    PSARC 2010/312 Link-editor guidance

The History Of Unfortunate Link-Editor Defaults

The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true -- in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command-line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

Change ld Defaults

Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

New Link-Editor

One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

An Option To Make ld Do The Right Things Automatically

This line of reasoning is best summarized by a CR filed in 2005, entitled

    6239804 make it easier for ld(1) to do what's best

The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best: that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it.

I drew two conclusions from the above history:

- For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.

- For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

- -z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.

- It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject.

- The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: the user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

ld Command Line Options

The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

- In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
- Guidance suggestions always offer specific advice, not vague generalizations.

- You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.

- As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off.

- In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

        % ld -Dargs -z guidance=foo,nodefs
        debug:
        debug: Solaris Linkers: 5.11-1.2275
        debug:
        debug: arg[1]  option=-D:  option-argument: args
        debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
        debug: warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

    -z fatal-warnings | nofatal-warnings
    --fatal-warnings | --no-fatal-warnings

        The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior.

The -z guidance option is defined as follows:

    -z guidance[=item1,item2,...]

        Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

        It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris.

        The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

        Specify Required Dependencies
            Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

        Do Not Specify Non-Required Dependencies
            Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

        Lazy Loading
            Dependencies should be identified for lazy loading. Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

        Direct Bindings
            Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect.

        Pure Text Segment
            Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

        Mapfile Syntax
            All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

        Library Search Path
            Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

        In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example

The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

- Does not specify all its dependencies
- Specifies dependencies it does not use
- Does not use direct bindings
- Uses a version 1 mapfile
- Contains relocations to the readonly allocable text (not PIC)

This scenario is sadly very common -- many shared objects have one or more of these issues.

    % cat hello.c
    #include <stdio.h>
    #include <unistd.h>

    void
    hello(void)
    {
            printf("hello user %d\n", getpid());
    }

    % cat mapfile.v1
    # This version 1 mapfile will trigger a guidance message

    % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object. However, turning on guidance reveals a number of things that could be better:

    % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
    ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
    ld: guidance: -z lazyload option recommended before first dependency
    ld: guidance: -B direct or -z direct option recommended before first dependency
    Undefined                       first referenced
     symbol                             in file
    getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
    printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
    ld: warning: symbol referencing errors
    ld: guidance: -z defs option recommended for shared objects
    ld: guidance: removal of unused dependency recommended: libelf.so.1
    warning: Text relocation remains                referenced
        against symbol                  offset      in file
    .rodata1 (section)                  0xa         hello.o
    getpid                              0x4         hello.o
    printf                              0xf         hello.o
    ld: guidance: position independent (PIC) code recommended for shared objects
    ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

    % cat mapfile.v2
    # This version 2 mapfile will not trigger a guidance message
    $mapfile_version 2

    % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings:

    % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
    ld: guidance: -B direct or -z direct option recommended before first dependency
    ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate:

    % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions

The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Nginx compiled --with-http_spdy_module yet raises an error complaining about ngx_http_spdy_module

    - by c19
        [emerg] 21101#0: the "spdy" parameter requires ngx_http_spdy_module in /etc/nginx/conf.d/cc.conf

    Isn't that the same module I compiled in? It causes a multi-redirection error too. I have no idea what is going on. Full configure output:

        nginx version: nginx/1.4.2
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-pcre --with-http_ssl_module --with-http_spdy_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=/usr/local/src/openssl-1.0.1e
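    A sketch of the first thing to check: the nginx binary that is actually running may not be the one just compiled -- a stock package at another path is the usual culprit when a module that was compiled in is reported missing.

        # Which binary does the shell/service resolve to?
        $ which nginx
        $ ps -o args= -C nginx | head -1

        # Does the *running* binary list spdy in its configure args?
        # (nginx -V prints to stderr, hence the redirect.)
        $ /usr/sbin/nginx -V 2>&1 | grep -o with-http_spdy_module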

    Read the article

  • Rackspace Cloud Sites: Compute Cycles exploding. Very expensive

    - by Jaap
    Since last week my compute cycles (CC) went through the roof (Rackspace Cloud Sites). Normally I stay under 10,000 cycles per month. Now this month I already have more than 75,000 compute cycles. I don't have more visitors, and I did not change anything in the code. I looked in the raw log files; that didn't help either... This explosion of CC already costs me more than 750 USD right now. And still counting. Anyone know what to do? I contacted Rackspace last week, but still no solution/answer... Looks like Rackspace is liking the money! Help! Thanks.

    Read the article

  • Running a home mail server using dynamic DNS [closed]

    - by Anand
    Hi, is it possible to run an email server on my home box using dynamic DNS? The scenario is: I want to auto-cc all incoming and outgoing emails from one account to another via some server-side config, instead of configuring email-client rules. I have tried Google Apps Mail, but it doesn't allow auto-cc of outgoing emails. After having read tons of blogs, forum messages etc. (hope I have been reading the correct info :) ), the only option to achieve what I need is to set up my own mail server, but the cost of getting a static IP doesn't fit my budget. Please can someone point me in the correct direction? Platform doesn't matter; I can set up a Windows or Linux server.
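    For the server-side copying itself, a sketch with Postfix (the map file names are arbitrary): sender_bcc_maps and recipient_bcc_maps add an automatic Bcc for outgoing and incoming mail respectively. Note that the dynamic-DNS part is the harder half -- an MX record can point at a dyndns hostname, but many residential ISPs block port 25, which is worth checking before building anything.

        # /etc/postfix/main.cf
        sender_bcc_maps    = hash:/etc/postfix/bcc_senders     # copies of outgoing mail
        recipient_bcc_maps = hash:/etc/postfix/bcc_recipients  # copies of incoming mail

        # /etc/postfix/bcc_senders and /etc/postfix/bcc_recipients
        # (run 'postmap' on each file after editing)
        user@example.com    archive@example.com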

    Read the article

  • What is the correct cipher name for RC4 in Chrome?

    - by qbi
    I want to remove RC4 from Google Chrome and found the command-line option --cipher-suite-blacklist. However, I wasn't able to figure out what the correct notation for RC4 is. Whatever I tried so far only brought the message:

        ERROR:ssl_config_service_manager_pref.cc(55)] Ignoring unrecognized or \
        unparsable cipher suite:

    Even the names listed in ssl_cipher_suite_names.cc don't work. What should I enter to remove RC4 as a cipher for SSL/TLS? I'm working with some different versions of GNU/Linux and sometimes also with Windows, so it would be nice if the command-line argument worked under all OSes. I used the following commands:

        chrome --cipher-suite-blacklist=TLS_RSA_WITH_RC4_128_MD5 --ssl-version-min=tls1.1
        chrome --cipher-suite-blacklist=RC4 --ssl-version-min=tls1.1
        chrome --cipher-suite-blacklist=0xXYZ,0xUVW --ssl-version-min=tls1.1
        # XYZ and UVW are some hexadecimal numbers
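    A sketch of what may be missing: the flag is reported to expect the 16-bit IANA cipher-suite identifiers written in hex, not the suite names. For the common RC4 suites the IANA numbers are 0x0004 (TLS_RSA_WITH_RC4_128_MD5), 0x0005 (TLS_RSA_WITH_RC4_128_SHA), 0xc007 (TLS_ECDHE_ECDSA_WITH_RC4_128_SHA), and 0xc011 (TLS_ECDHE_RSA_WITH_RC4_128_SHA), so something like:

        chrome --cipher-suite-blacklist=0x0004,0x0005,0xc007,0xc011 --ssl-version-min=tls1.1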

    Read the article

  • Can't reach custom C# forms application remotely.

    - by gnucom
    Hello, I'm working in Windows Server 2008. I have a very basic C# forms application (not a service) that is listening on a port, say 56112. When using telnet I can connect from the localhost and send and receive data. For some reason I cannot remotely connect to the application. I know I have connectivity to the machine, because I can telnet to port 23 on it remotely just fine. I've opened this port in the firewall, created in/out rules in the advanced firewall, disabled the firewall completely, and more. Any suggestions would be great! This is the telnet output:

        Microsoft Telnet> open server.cc 56112
        Connecting server.cc...Could not open connection to the host, on port 56112: Connect failed
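    With the firewall ruled out, the usual remaining cause is the listener being bound to the loopback interface: a socket bound to 127.0.0.1 accepts local telnet but refuses remote clients. A sketch of the fix:

        using System.Net;
        using System.Net.Sockets;

        // Binds to every interface; new TcpListener(IPAddress.Loopback, 56112)
        // would only ever be reachable from the machine itself.
        TcpListener listener = new TcpListener(IPAddress.Any, 56112);
        listener.Start();

    Running netstat -an on the server shows the actual binding: a line like 127.0.0.1:56112 LISTENING confirms the loopback-only case, while 0.0.0.0:56112 means the problem is elsewhere.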

    Read the article

  • cruisecontrol.net failing build with no errors

    - by John Hoge
    Hi, I've been using CCNet for some time now but just upgraded to .NET 4. I've installed the new framework on my dev box along with VS2010, and on my CC.NET server as well. I've just installed CruiseControl.NET version 1.5.6804.1 and changed my MSBuild tasks to point to the new v4.0.30319 framework directory. I've got two projects on .NET 4 now that just don't build. They build perfectly well in VS and run well on both Cassini and IIS. I just don't get any error messages from cc.net, just this:

        BUILD FAILED
        Project: GoodBay
        Date of build: 2010-05-11 10:21:59
        Running time: 00:00:02
        Integration Request: Build (ForceBuild) triggered from SVN
        Modifications since last build (0)

    Read the article

  • error while installing binutils in LFS

    - by user53347
        lfs:/mnt/lfs/sources/binutils-build$ ../binutils-2.15.94.0.2.2/configure \
            --target=$LFS_TGT --prefix=/tools \
            --disable-nls --disable-werror
        loading cache ./config.cache
        checking host system type... i686-pc-linux-gnuoldld
        checking target system type... i686-lfs-linux-gnu
        checking build system type... i686-pc-linux-gnuoldld
        checking for a BSD compatible install... /usr/bin/install -c
        checking whether ln works... yes
        checking whether ln -s works... yes
        checking for gcc... no
        checking for cc... no
        configure: error: no acceptable cc found in $PATH
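    The configure script is reporting that it cannot find any C compiler ("checking for gcc... no / checking for cc... no"). A sketch of the usual checks on the host system:

        # Is a compiler installed and on PATH for the build user?
        $ type gcc cc

        # If gcc exists but 'cc' does not, point configure at it explicitly:
        $ CC=gcc ../binutils-2.15.94.0.2.2/configure --target=$LFS_TGT \
              --prefix=/tools --disable-nls --disable-werror

    If gcc itself is missing, install the host distribution's compiler toolchain first (e.g. build-essential on Debian/Ubuntu hosts), since the first LFS pass builds with the host's compiler. Also note the stale ./config.cache being loaded; removing it after fixing PATH avoids cached "no" answers.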

    Read the article
