Search Results

Search found 22043 results on 882 pages for 'int ua'.

  • Where can we use "this" in a recursive method...

    - by T_Geek
    Hi. I have a method in a static class which tries to convert a binary tree to a list. I'd like to write it recursively, but I couldn't get it working. I've implemented some operations in my class, like add, delete, and find; those are easy to implement. Here is the code:

        class ARB {
        private:
            struct BT {
                int data;
                BT *l;
                BT *r;
            };
            struct BT *p;
        public:
            ARB();
            ~ARB();
            void del(int n);
            void add(int n);
        };

        void ARB::del(int num) {
            // The code; don't care about it
        };

        main() {
            //
            BTR T;
            T.add(3);
            T.add(5);
        };

    Here is my attempt to convert the binary tree to a list:

        LLC ARB::changeit() {
            LLC x;
            while (this != NULL) {
                x.add(this->data);
                //
                if (this.l == NULL) {
                    x.print();      // To print the elements of the list
                    return(x);
                } else {
                    x = changeit(this.l);
                }
                if (this.r != NULL) {
                    x.~LLC();
                    x = changeit(this.r);
                    return(x);
                }
            }
        }
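    For what it's worth, a recursive traversal is usually written against an explicit node pointer rather than this, since this can never be NULL inside a valid member function call. A minimal sketch of that shape (the LLC list type and its add() are assumed from the question):

        // Sketch: collect tree values into a list by recursing on node pointers.
        void ARB::toList(BT *node, LLC &out) {
            if (node == NULL) return;
            toList(node->l, out);   // left subtree
            out.add(node->data);    // current node
            toList(node->r, out);   // right subtree
        }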

  • Entity Framework 1:1 Relationship Identity Columns

    - by Hupperware
    I cannot for the life of me get two separate tables to save with a relationship and two identity columns. I keep getting errors like "A dependent property in a ReferentialConstraint is mapped to a store-generated column. Column: 'OlsTerminalId'." I can't imagine that this setup is abnormal.

        [DataContract]
        public class OlsData
        {
            private static readonly DateTime theUnixEpoch =
                new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);

            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public int OlsDataId { get; set; }

            [DataMember(Name = "id")]
            public int Id { get; set; }

            public DateTime Timestamp { get; set; }

            [DataMember(Name = "created")]
            [NotMapped]
            public double Created
            {
                get { return (Timestamp - theUnixEpoch).TotalMilliseconds; }
                set { Timestamp = theUnixEpoch.AddMilliseconds(value); }
            }

            [InverseProperty("OlsData")]
            [DataMember(Name = "terminal")]
            public virtual OlsTerminal OlsTerminal { get; set; }
        }

        [DataContract]
        public class OlsTerminal
        {
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public int OlsTerminalId { get; set; }

            public int OlsDataId { get; set; }

            [DataMember(Name = "account")]
            [NotMapped]
            public virtual OlsAccount OlsAccount { get; set; }

            [DataMember(Name = "terminalId")]
            public string TerminalId { get; set; }

            [DataMember(Name = "merchantId")]
            public string MerchantId { get; set; }

            public virtual OlsData OlsData { get; set; }
        }

        public class OlsDataContext : DbContext
        {
            protected override void OnModelCreating(DbModelBuilder aModelBuilder)
            {
                aModelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
            }

            public OlsDataContext(string aConnectionString) : base(aConnectionString) {}

            public DbSet<OlsData> OlsData { get; set; }
            public DbSet<OlsTerminal> OlsTerminal { get; set; }
            public DbSet<OlsAccount> OlsAccount { get; set; }
        }
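    For what it's worth, EF conventionally maps a 1:1 relationship with a shared primary key: the dependent side's key is both PK and FK and is not store-generated, which is exactly what the ReferentialConstraint error is complaining about. A sketch of that shape, assuming OlsTerminal is the dependent side (adapt the names as needed):

        // Sketch: shared-primary-key 1:1. The dependent's key is NOT an identity
        // column; it takes its value from the principal's key.
        public class OlsTerminal
        {
            [Key]
            [DatabaseGenerated(DatabaseGeneratedOption.None)] // no store generation
            [ForeignKey("OlsData")]
            public int OlsDataId { get; set; }

            public virtual OlsData OlsData { get; set; }
            // ... remaining properties as above
        }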

  • std::vector element initialization

    - by Chameleon
        std::vector<int> v1(1000);
        std::vector<std::vector<int>> v2(1000);
        std::vector<std::vector<int>::const_iterator> v3(1000);

    How are the elements of these three vectors initialized? For int, I tested it and saw that all elements become 0. Is this standard? I believed that primitives remain uninitialized. I created a vector with 300000000 elements, gave them non-zero values, deleted it and recreated it, to rule out the OS clearing the memory for data safety. The elements of the recreated vector were 0 too. What about the iterators? Is there an initial value (0) from the default constructor, or does the value remain undefined? When I checked, the iterators pointed to 0, but that could be the OS. When I created a special object to track constructors, I saw that the vector ran the default constructor for the first object and the copy constructor for all the others. Is this standard? Is there a way to avoid initialization of the elements entirely? Or must I write my own vector? (Oh my God, I always say NOT ANOTHER VECTOR IMPLEMENTATION.) I ask because I use ultra-huge sparse matrices with parallel processing, so I cannot use push_back(), and of course I don't want useless initialization when I will change the values later anyway.
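    For the record, std::vector<int> v(n) value-initializes its elements, so the zeros are guaranteed by the standard; in C++03 the elements are copied from a single value-initialized temporary, which matches the default-constructor-then-copy-constructor pattern observed above. A minimal sketch of the usual way to avoid paying for initialization up front:

        #include <vector>

        int main() {
            std::vector<int> v;
            v.reserve(300000000);   // allocates capacity, constructs nothing
            v.push_back(42);        // elements are created only on demand
            // Note: only size() elements exist; v[1000] is not valid here.
        }

    If push_back() is ruled out (e.g. parallel writes), the usual alternatives are a raw buffer obtained from an allocator, or a custom allocator whose construct() performs default-initialization instead of value-initialization.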

  • Calling a JavaScript function from an aspx.cs code-behind

    - by David Hodgson
    Hi, I would like to call a JavaScript function from an aspx control. For instance, suppose I had:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
            <script type="text/javascript">
                function test(x, y) {
                }
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Button ID="Button1" runat="server" Text="Button" onclick="Button1_Click"/>
            </div>
            </form>
        </body>
        </html>

    and in the code-behind:

        protected void Button1_Click(object sender, EventArgs e)
        {
            // do stuff (really going to a database to fill x and y)
            int[] x = new int[] { 1, 2, 3, 4, 5 };
            int[] y = new int[] { 1, 2, 3, 4, 5 };
            // call javascript function as test(x, y);
        }

    Is there a way to do it? DUPLICATE: calling-a-javascript-function-at-the-end-of-button-click-code-behind
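    A common approach (one sketch among several) is to register a startup script from the click handler, so the call runs when the rendered page loads; array arguments have to be serialized into the script text:

        protected void Button1_Click(object sender, EventArgs e)
        {
            int[] x = new int[] { 1, 2, 3, 4, 5 };
            int[] y = new int[] { 1, 2, 3, 4, 5 };

            // Builds "test([1,2,3,4,5], [1,2,3,4,5]);" and emits it into the page.
            string script = string.Format("test([{0}], [{1}]);",
                string.Join(",", Array.ConvertAll(x, i => i.ToString())),
                string.Join(",", Array.ConvertAll(y, i => i.ToString())));
            ClientScript.RegisterStartupScript(GetType(), "callTest", script, true);
        }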

  • Creating an object in the loop

    - by Jacob
        std::vector<double> C(4);
        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    is much faster than

        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                std::vector<double> C(4);
                C[0] = 1.0;
                C[1] = 1.0;
                C[2] = 1.0;
                C[3] = 1.0;
            }

    I realize this happens because std::vector is repeatedly created and destroyed in the loop, but I was under the impression this would be optimized away. Is it completely wrong to keep variables local in a loop whenever possible? I was under the (perhaps false) impression that this would provide optimization opportunities for the compiler. EDIT: I use VC++ 2005 (release mode) with full optimization (/Ox).
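    The repeated cost here is the heap allocation and deallocation in vector's constructor and destructor, which compilers of that era would not hoist out of a loop. If the size is a small compile-time constant, a stack array keeps the variable loop-local without any allocator traffic; a sketch:

        for (int i = 0; i < 1000; ++i)
            for (int j = 0; j < 2000; ++j) {
                double C[4];    // stack storage: no per-iteration heap allocation
                C[0] = C[1] = C[2] = C[3] = 1.0;
            }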

  • Does everything after my try statement have to be encompassed in that try statement to access variables?

    - by Mithrax
    I'm learning Java, and one thing I've found that I don't like is that I generally have code like this:

        import java.util.*;
        import java.io.*;

        public class GraphProblem {
            public static void main(String[] args) {
                if (args.length < 2) {
                    System.out.println("Error: Please specify a graph file!");
                    return;
                }
                FileReader in = new FileReader(args[1]);
                Scanner input = new Scanner(in);
                int size = input.nextInt();
                WeightedGraph graph = new WeightedGraph(size);
                for (int i = 0; i < size; i++) {
                    graph.setLabel(i, Character.toString((char)('A' + i)));
                }
                for (int i = 0; i < size; i++) {
                    for (int j = 0; j < size; j++) {
                        graph.addEdge(i, j, input.nextInt());
                    }
                }
                // .. lots more code
            }
        }

    I have an uncaught exception around my FileReader, so I have to wrap it in a try-catch to catch that specific exception. My question is: does that try { } have to encompass everything after it in my method that uses either my FileReader (in) or my Scanner (input)? If I don't wrap the whole remainder of the program in the try statement, then anything outside of it can't access in/input, because they may not have been initialized, or were initialized outside of its scope. So I can't isolate the try-catch to just the portion that initializes the FileReader and close the try statement immediately after that. So, is it "best practice" to have the try statement wrap all portions of the code that access variables initialized in it? Thanks!
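    One sketch of the usual pattern: declare the variable before the try, assign it inside, and bail out in the catch, so the rest of the method stays outside the try block:

        FileReader in = null;
        try {
            in = new FileReader(args[1]);
        } catch (FileNotFoundException e) {
            System.out.println("Error: could not open " + args[1]);
            return;                         // nothing below runs without the file
        }
        Scanner input = new Scanner(in);    // fine: 'in' is definitely assigned
        // ... rest of the method, outside the try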

  • cannot convert from 'string' to 'System.Windows.Forms.IWin32Window'

    - by Michael Quiles
    This is supposed to show the winner in the xWinnerForm label, but I can't figure it out: xWinnerForm.Show(b1.Text); fails. I'm new to C#, so can you please explain in layman's terms? Thanks.

        static public bool CheckWinner(Button[] myControls)
        {
            bool gameOver = false;
            for (int i = 0; i < 8; i++)
            {
                int a = Winners[i, 0];
                int b = Winners[i, 1];
                int c = Winners[i, 2];
                Button b1 = myControls[a], b2 = myControls[b], b3 = myControls[c];
                if (b1.Text == "" || b2.Text == "" || b3.Text == "")
                    continue;
                if (b1.Text == b2.Text && b2.Text == b3.Text)
                {
                    gameOver = true;
                    Form xWinnerForm = new xWinnerForm();
                    xWinnerForm.Show(b1.Text);
                }

                public void Show(string text)
                {
                    this.xWinnerLabel.Text = text;
                    this.Show();
                }
            }
            return gameOver;
        }
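    The compile error arises because Form.Show(IWin32Window) is the only overload of Show that takes an argument, and a string cannot convert to IWin32Window; the custom Show(string) above is also declared inside CheckWinner, where C# does not allow method declarations. A sketch of one fix, moving the method into the form class and using the concrete type at the call site (xWinnerLabel assumed from the question):

        // Inside the xWinnerForm class:
        public void ShowWinner(string text)
        {
            this.xWinnerLabel.Text = text;
            this.Show();                    // parameterless Form.Show()
        }

        // At the call site in CheckWinner:
        xWinnerForm winnerForm = new xWinnerForm();   // concrete type, not Form
        winnerForm.ShowWinner(b1.Text);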

  • How do I control clipping with non-opaque graphics items in Qt?

    - by JJacobsson
    I have a bunch of QGraphicsSvgItems in a QGraphicsScene that are drawn connected by QGraphicsLineItems; this shows a graph of a tree structure. What I want to do is provide a feature where everything but a selected sub-tree becomes transparent: a kind of "highlight this sub-tree" feature. That part was easy, but the results are ugly, because now the lines can be seen through the semi-transparent SVGs. I am looking for some way to still clip the other QGraphicsItems in the scene to the SVG items, giving the effect that the SVGs are semi-transparent windows to the background. I know this code does not use SVGs, but I figure you can replace that yourself if you are so inclined.

        int main(int argc, char *argv[]) {
            QApplication app(argc, argv);
            QGraphicsScene scene;
            for (int i = 0; i < 10; ++i) {
                QGraphicsLineItem *line = new QGraphicsLineItem;
                line->setLine(i * 25.0 + 1.0, 0, i * 25.0 + 23.0, 0);
                scene.addItem(line);
            }
            for (int i = 0; i < 11; ++i) {
                QGraphicsEllipseItem *ellipse = new QGraphicsEllipseItem;
                ellipse->setRect((i * 25.0) - 9.0, -9.0, 18.0, 18.0f);
                ellipse->setBrush(QBrush(Qt::green, Qt::SolidPattern));
                ellipse->setOpacity(0.5);
                scene.addItem(ellipse);
            }
            QGraphicsView view(&scene);
            view.show();
            return app.exec();
        }

    I would like the lines not to be seen behind the circles. I have tried fiddling with the depth buffer and the stencil buffer using OpenGL rendering, to no avail. How do I get the QGraphicsSvgItems (or the QGraphicsEllipseItems in the example code) to clip the lines even though they are semi-transparent?

  • CURL C API: callback was not called

    - by Pierre
    Hi all, the code below is a test of the cURL C API. The problem is that the callback function write_callback is never called. Why?

        /** compilation: g++ source.cpp -lcurl */
        #include <assert.h>
        #include <iostream>
        #include <cstdlib>
        #include <cstring>
        #include <cassert>
        #include <curl/curl.h>
        using namespace std;

        static size_t write_callback(void *ptr, size_t size, size_t nmemb, void *userp) {
            std::cerr << "CALLBACK WAS CALLED" << endl;
            exit(-1);
            return size*nmemb;
        }

        static void test_curl() {
            int any_data=1;
            CURLM* multi_handle=NULL;
            CURL* handle_curl = ::curl_easy_init();
            assert(handle_curl!=NULL);
            ::curl_easy_setopt(handle_curl, CURLOPT_URL, "http://en.wikipedia.org/wiki/Main_Page");
            ::curl_easy_setopt(handle_curl, CURLOPT_WRITEDATA, &any_data);
            ::curl_easy_setopt(handle_curl, CURLOPT_VERBOSE, 1);
            ::curl_easy_setopt(handle_curl, CURLOPT_WRITEFUNCTION, write_callback);
            ::curl_easy_setopt(handle_curl, CURLOPT_USERAGENT, "libcurl-agent/1.0");
            multi_handle = ::curl_multi_init();
            assert(multi_handle!=NULL);
            ::curl_multi_add_handle(multi_handle, handle_curl);
            int still_running=0;
            /* lets start the fetch */
            while(::curl_multi_perform(multi_handle, &still_running) == CURLM_CALL_MULTI_PERFORM)
                ;
            std::cerr << "End of curl_multi_perform." << endl;
            // cleanup should go here
            ::exit(EXIT_SUCCESS);
        }

        int main(int argc, char** argv) {
            test_curl();
            return 0;
        }

    Many thanks, Pierre
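    A likely cause, for what it's worth: with the multi interface, a return value other than CURLM_CALL_MULTI_PERFORM does not mean the transfer finished, only that there is nothing to do right now. The loop above exits while still_running is still 1, before any data has arrived, so the callback never fires. A sketch of a driving loop that waits for socket activity (curl_multi_wait assumes libcurl 7.28 or newer; older code uses select() with curl_multi_fdset):

        int still_running = 0;
        ::curl_multi_perform(multi_handle, &still_running);
        while (still_running) {
            int numfds = 0;
            ::curl_multi_wait(multi_handle, NULL, 0, 1000, &numfds); // wait for activity
            ::curl_multi_perform(multi_handle, &still_running);      // drive transfers
        }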

  • Can't open device

    - by yoavstr
    I have a little problem. I installed this module into my kernel, and its entry appears under /proc. When I try to open() it from user mode I get the following message: "Can't open device file: my_dev".

        static int module_permission(struct inode *inode, int op, struct nameidata *foo)
        {
            /* if it's a write */
            if ((op == 2) && (writer == DOESNT_EXIST)) {
                writer = EXIST;
                return 0;
            }
            /* if it's a read */
            if (op == 4) {
                numOfReaders++;
                return 0;
            }
            return -EACCES;
        }

        int procfs_open(struct inode *inode, struct file *file)
        {
            try_module_get(THIS_MODULE);
            return 0;
        }

        static struct file_operations File_Ops_4_Our_Proc_File = {
            .read    = procfs_read,
            .write   = procfs_write,
            .open    = procfs_open,
            .release = procfs_close,
        };

        static struct inode_operations Inode_Ops_4_Our_Proc_File = {
            .permission = module_permission,  /* check for permissions */
        };

        int init_module()
        {
            /* create the /proc file */
            Our_Proc_File = create_proc_entry(PROC_ENTRY_FILENAME, 0644, NULL);
            /* check if the /proc file was created successfully */
            if (Our_Proc_File == NULL) {
                printk(KERN_ALERT "Error: Could not initialize /proc/%s\n",
                       PROC_ENTRY_FILENAME);
                return -ENOMEM;
            }
            Our_Proc_File->owner     = THIS_MODULE;
            Our_Proc_File->proc_iops = &Inode_Ops_4_Our_Proc_File;
            Our_Proc_File->proc_fops = &File_Ops_4_Our_Proc_File;
            Our_Proc_File->mode      = S_IFREG | S_IRUGO | S_IWUSR;
            Our_Proc_File->uid       = 0;
            Our_Proc_File->gid       = 0;
            Our_Proc_File->size      = 80;
            /* i added: init the writer status */
            writer = DOESNT_EXIST;
            numOfReaders = 0;
            printk(KERN_INFO "/proc/%s created\n", PROC_ENTRY_FILENAME);
            return 0;
        }
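    When open() fails, errno says why; a small user-space sketch that reports it (the /proc entry name is hypothetical, substitute the value of PROC_ENTRY_FILENAME):

        #include <stdio.h>
        #include <string.h>
        #include <errno.h>
        #include <fcntl.h>

        int main(void)
        {
            int fd = open("/proc/my_dev", O_RDWR);  /* hypothetical entry name */
            if (fd < 0) {
                /* EACCES here would point at module_permission() returning -EACCES,
                   e.g. if an open for read-write asks for both read and write. */
                fprintf(stderr, "open failed: %s\n", strerror(errno));
                return 1;
            }
            return 0;
        }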

  • Any problems with this C++ const reference accessor interface idiom?

    - by mskfisher
    I was converting a struct to a class so I could enforce a setter interface for my variables. I did not want to change all of the places where the variables were read, though. So I converted this:

        struct foo_t {
            int x;
            float y;
        };

    to this:

        class foo_t {
            int _x;
            float _y;

        public:
            foo_t() : x(_x), y(_y) { set(0, 0.0); }

            const int &x;
            const float &y;

            void set(int x, float y) {
                _x = x;
                _y = y;
            }
        };

    I'm interested in this because it seems to model C#'s idea of public read-only properties. It compiles fine, and I haven't seen any problems yet. Besides the boilerplate of binding the const references in the constructor, what are the downsides of this method? Any strange aliasing issues? Why haven't I seen this idiom before?
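    One concrete downside worth knowing, sketched against the class above: the compiler-generated copy constructor binds the copy's references to the source object's members, and copy assignment cannot be generated at all for a class with reference members:

        foo_t a;
        a.set(1, 2.0f);

        foo_t b(a);      // b.x and b.y are bound to a's _x and _y, not b's own
        a.set(5, 6.0f);
        int n = b.x;     // reads 5: the copy silently aliases the original
        // b = a;        // ill-formed: reference members suppress operator=

    Each instance also carries the storage for two references on top of the data members.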

  • What data structure should I use for a BigInt class

    - by user1086004
    I would like to implement a BigInt class which will be able to handle really big numbers. I only want to add and multiply numbers; however, the class should also handle negative numbers. I wanted to represent the number as a string, but there is a big overhead in converting between string and int for adding. I want to implement addition the way it is taught in school: add the corresponding digits, and if the result is bigger than 10, carry into the next position. Then I thought it would be better to handle it as an array of unsigned long long int and keep the sign separate as a bool. With this I'm afraid of the size of the int, as the C++ standard, as far as I know, guarantees only that int < float < double. Correct me if I'm wrong. So when I reach some limit I should move forward in the array and start adding into the next array position. Is there any data structure that is appropriate or better for this? Thanks in advance.
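    A common layout (a sketch, not the only valid one) stores fixed-width "limbs" in a vector, least significant first, with the sign kept separately; the fixed-width types from <cstdint> remove any guessing about integer sizes:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Sketch: sign-and-magnitude bignum with 32-bit limbs, least significant first.
        struct BigInt {
            bool negative;
            std::vector<uint32_t> limbs;    // value = sum of limbs[i] * 2^(32*i)
        };

        // Schoolbook addition of magnitudes: 64-bit arithmetic holds limb+limb+carry.
        std::vector<uint32_t> addMagnitudes(const std::vector<uint32_t> &a,
                                            const std::vector<uint32_t> &b) {
            std::vector<uint32_t> out;
            uint64_t carry = 0;
            for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
                uint64_t sum = carry;
                if (i < a.size()) sum += a[i];
                if (i < b.size()) sum += b[i];
                out.push_back(static_cast<uint32_t>(sum));  // keep the low 32 bits
                carry = sum >> 32;                          // high bits carry over
            }
            return out;
        }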

  • Reference a GNU C DLL built in GCC against Cygwin, from C#/NET

    - by Dale Halliwell
    Here is what I want: I have a huge legacy C/C++ codebase written for POSIX, including some very POSIX-specific stuff like pthreads. This can be compiled on Cygwin/GCC and run as an executable under Windows with the Cygwin DLL. What I would like to do is build the codebase itself into a Windows DLL that I can then reference from C# and write a wrapper around to access some parts of it programmatically. I have tried this approach with the very simple "hello world" example at http://www.cygwin.com/cygwin-ug-net/dll.html and it doesn't seem to work.

        #include <stdio.h>

        extern "C" __declspec(dllexport) int hello();

        int hello()
        {
            printf("Hello World!\n");
            return 42;
        }

    I believe I should be able to reference a DLL built with the above code in C# using something like:

        [DllImport("kernel32.dll")]
        public static extern IntPtr LoadLibrary(string dllToLoad);

        [DllImport("kernel32.dll")]
        public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

        [DllImport("kernel32.dll")]
        public static extern bool FreeLibrary(IntPtr hModule);

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        private delegate int hello();

        static void Main(string[] args)
        {
            var path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "helloworld.dll");
            IntPtr pDll = LoadLibrary(path);
            IntPtr pAddressOfFunctionToCall = GetProcAddress(pDll, "hello");
            hello hello = (hello)Marshal.GetDelegateForFunctionPointer(
                pAddressOfFunctionToCall, typeof(hello));
            int theResult = hello();
            Console.WriteLine(theResult.ToString());
            bool result = FreeLibrary(pDll);
            Console.ReadKey();
        }

    But this approach doesn't seem to work: LoadLibrary returns null. The DLL (helloworld.dll) is found; it just seems unable to load it or find the exported function. I am sure that if I get this basic case working I can reference the rest of my codebase this way. Any suggestions or pointers, or does anyone know if what I want is even possible? Thanks.
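    Two things worth checking, hedged since I can't verify this setup: first, declare the P/Invokes with SetLastError = true and read Marshal.GetLastWin32Error() to learn why LoadLibrary returned null; second, Cygwin's documentation says a Cygwin-linked DLL can only be used from a non-Cygwin process if cygwin1.dll is loaded and initialized first, via its cygwin_dll_init export, with caveats about reserving scratch stack space that the Cygwin FAQ describes. A sketch:

        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        private delegate void CygwinInit();

        // Load and initialize the Cygwin runtime before touching the real DLL.
        // Assumes cygwin1.dll can be found (e.g. its directory is on PATH).
        IntPtr cygwin = LoadLibrary("cygwin1.dll");
        IntPtr initPtr = GetProcAddress(cygwin, "cygwin_dll_init");
        CygwinInit init = (CygwinInit)Marshal.GetDelegateForFunctionPointer(
            initPtr, typeof(CygwinInit));
        init();

        IntPtr pDll = LoadLibrary(path);    // now load the Cygwin-linked DLL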

  • How to create custom query for CollectionOfElements

    - by Shervin
    Hi. I have problems creating a custom query. These are the entities:

        @Entity
        public class ApplicationProcess {
            @CollectionOfElements
            private Set<Template> defaultTemplates;

            // more fields
        }

    And Template.java:

        @Embeddable
        @EqualsAndHashCode(exclude={"file", "selected", "used"})
        public class Template implements Comparable<Template> {
            @Setter private ApplicationProcess applicationProcess;
            @Setter private Boolean used = Boolean.valueOf(false);

            public Template() {}

            @Parent
            public ApplicationProcess getApplicationProcess() {
                return applicationProcess;
            }

            @Column(nullable = false)
            @NotNull
            public String getName() {
                return name;
            }

            @Column(nullable = true)
            public Boolean isUsed() {
                return used;
            }

            public int compareTo(Template o) {
                return getName().compareTo(o.getName());
            }
        }

    I want to create an update statement. I have tried these two:

        int v = entityManager.createQuery("update ApplicationProcess_defaultTemplates t set t.used = true "
                + "WHERE t.applicationProcess.id=:apId")
                .setParameter("apId", ap.getId())
                .executeUpdate();

    which fails with:

        ApplicationProcess_defaultTemplates is not mapped [update ApplicationProcess_defaultTemplates t set t.used = true WHERE t.applicationProcess.id=:apId]

    And I have tried:

        int v = entityManager.createQuery("update Template t set t.used = true "
                + "WHERE t.applicationProcess.id=:apId")
                .setParameter("apId", ap.getId())
                .executeUpdate();

    with the same kind of error:

        Template is not mapped [update Template t set t.used = true WHERE t.applicationProcess.id=:apId]

    Any ideas? UPDATE: I fixed it by creating a native query:

        int v = entityManager.createNativeQuery("update ApplicationProcess_defaultTemplates t "
                + "set t.used=true where t.ApplicationProcess_id=:apId")
                .setParameter("apId", ap.getId())
                .executeUpdate();

  • Combining Java hashcodes into a "master" hashcode

    - by Nick Wiggill
    I have a vector class with hashCode() implemented. It wasn't written by me, but it uses two prime numbers by which to multiply the two vector components before XORing them. Here it is:

        /* class Vector2f */
        ...
        public int hashCode() {
            return 997 * ((int)x) ^ 991 * ((int)y); // large primes!
        }
        ...

    As this is from an established Java library, I know that it works just fine. Then I have a Boundary class, which holds two vectors, "start" and "end" (representing the endpoints of a line). The values of these two vectors are what characterize the boundary.

        /* class Boundary */
        ...
        public int hashCode() {
            return 1013 * (start.hashCode()) ^ 1009 * (end.hashCode());
        }

    Here I have attempted to create a good hashCode() for the unique 2-tuple of vectors (start and end) constituting this boundary. My question: is this hashCode() implementation going to work? (Note that I have used two different prime numbers in the latter hashCode() implementation; I don't know if this is necessary, but better safe than sorry when trying to avoid common factors, I guess, since I presume this is why primes are popular for hashing functions.)
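    It should satisfy the equals/hashCode contract as long as equals() compares start and end, and the distinct multipliers mean swapped endpoints usually hash differently. For comparison, the conventional Java alternative is the 31-based accumulation that java.util.Objects.hash (Java 7+) performs:

        @Override
        public int hashCode() {
            int result = 17;                        // arbitrary non-zero seed
            result = 31 * result + start.hashCode();
            result = 31 * result + end.hashCode();
            return result;                          // order-sensitive by construction
        }

        // Java 7+ one-liner with the same behavior up to constants:
        // return java.util.Objects.hash(start, end);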

  • Qt noob: add an action handler for multiple objects of the same type

    - by what
    I have a simple Qt application with 10 radio buttons named radio_1 through radio_10. It is a UI called Selector and is part of a class called TimeSelector. In my header file for this design I have this:

        //! [1]
        class TimeSelector : public QWidget
        {
            Q_OBJECT

        public:
            TimeSelector(QWidget *parent = 0);

        private slots:
            //void on_inputSpinBox1_valueChanged(int value);
            //void on_inputSpinBox2_valueChanged(int value);

        private:
            Ui::Selector ui;
        };
        //! [1]

    The commented-out void on_inputSpinBox1_valueChanged(int value) is from the tutorial for the simple calculator. I know I can do:

        void on_radio_1_valueChanged(int value);

    but I would need 10 functions. I want to make one function that works for all of them and lets me pass in maybe the name of the radio button that called it, or a reference to the radio button, so I can work with it and determine who it was. I am very new to Qt, but this seems like it should be very basic and doable. Thanks.
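    Two standard options: connect every button to one slot and ask QObject::sender() who emitted the signal, or use QSignalMapper. A sketch of the sender() variant (the slot name is hypothetical):

        // One slot shared by all radio buttons; sender() identifies the emitter.
        void TimeSelector::radioToggled(bool checked)
        {
            QRadioButton *button = qobject_cast<QRadioButton *>(sender());
            if (button && checked)
                qDebug() << "selected:" << button->objectName();  // e.g. "radio_7"
        }

        // In the constructor, connect each button to the shared slot:
        // connect(ui.radio_1, SIGNAL(toggled(bool)), this, SLOT(radioToggled(bool)));
        // ...or loop over findChildren<QRadioButton *>() instead of ten connects.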

  • C++ Loop - Need variable to accumulate sum

    - by user1780064
    I'm writing a program that asks the user to enter a value between 5 and 21 (inclusive). If the number entered is not in this range, it prints "Please try again". If the number is within the range, I need to take that number and print the sum of all the numbers from 1 to the value entered. So if the user entered 7, the sum would be 28. I successfully wrote the first loop, for the case of the number not being within the range, but cannot figure out how to write the second loop, or whether to use a while, do-while, or for loop. Please advise.

        #include <iostream>

        int main()
        {
            int uservalue;
            int count;
            int sum;

            // Prompt user for input
            do {
                cout << "Enter a value from 5 to 21: ";
                cin >> uservalue;
                if (uservalue < 5 || uservalue > 21)
                    cout << "Value out of range. Try again..." << endl;
            } while (uservalue < 5 || uservalue > 21);
            cout << endl;

            // Loop to accumulate sum
            for (count = 1, count < uservalue, count++;)
            {
                sum = uservalue + count;
                if (uservalue <= 5 || uservalue <= 21)
                    cout << the sum is " << sum << endl;
            }
            return 0;
        }
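    For what it's worth, a sketch of a working accumulation loop: initialize the sum to 0, use semicolons (not commas) in the for header, include the final value, and print once after the loop:

        int sum = 0;
        for (int count = 1; count <= uservalue; count++)
            sum += count;                   // accumulate 1 + 2 + ... + uservalue
        std::cout << "The sum is " << sum << std::endl;   // uservalue 7 -> 28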

  • Contacts & Autocomplete

    - by Vince
    First post. I'm new to Android and programming in general. What I'm attempting to do is have an autocomplete text box suggest names from the contact list; i.e., if they type in "john" it will suggest "John Smith" or any John in their contacts. The code is basic; I pulled it from a few tutorials.

        private void autoCompleteBox()
        {
            ContentResolver cr = getContentResolver();
            Uri contacts = Uri.parse("content://contacts/people");
            Cursor managedCursor1 = cr.query(contacts, null, null, null, null);
            if (managedCursor1.moveToFirst())
            {
                String contactname;
                String cphoneNumber;
                int nameColumn = managedCursor1.getColumnIndex("name");
                int phoneColumn = managedCursor1.getColumnIndex("number");
                Log.d("int Name", Integer.toString(nameColumn));
                Log.d("int Number", Integer.toString(phoneColumn));
                do
                {
                    // Get the field values
                    contactname = managedCursor1.getString(nameColumn);
                    cphoneNumber = managedCursor1.getString(phoneColumn);
                    if ((contactname != " " || contactname != null)
                            && (cphoneNumber != " " || cphoneNumber != null))
                    {
                        c_Name.add(contactname);
                        c_Number.add(cphoneNumber);
                        Toast.makeText(this, contactname, Toast.LENGTH_SHORT).show();
                    }
                } while (managedCursor1.moveToNext());
            }
            name_Val = (String[]) c_Name.toArray(new String[c_Name.size()]);
            phone_Val = (String[]) c_Number.toArray(new String[c_Name.size()]);
            ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
                    android.R.layout.simple_dropdown_item_1line, name_Val);
            personName.setAdapter(adapter);
        }

    personName is my AutoCompleteTextView. It actually works when I use it in the emulator (4.2) with manually entered contacts through the People app, but on my device it will not pop up any names. I'm sure it's something ridiculous, but I've tried to find the answer and I'm getting nowhere. Can't learn if I don't ask.
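    One thing to rule out: content://contacts/people is the long-deprecated pre-2.0 contacts API, and on real devices with synced contacts (as opposed to the emulator's locally created ones) it often returns nothing; ContactsContract is the supported replacement. A minimal sketch of the modern query:

        // Query display names through ContactsContract instead of contacts/people.
        // Requires <uses-permission android:name="android.permission.READ_CONTACTS"/>.
        Cursor cursor = getContentResolver().query(
                ContactsContract.Contacts.CONTENT_URI,
                new String[] { ContactsContract.Contacts.DISPLAY_NAME },
                null, null, null);
        while (cursor.moveToNext())
        {
            String name = cursor.getString(0);
            if (name != null)
                c_Name.add(name);           // same list as in the code above
        }
        cursor.close();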

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails (2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. It seems to have worked:

        SHOW VARIABLES LIKE '%character%';

        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    utf8
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      utf8
        character_set_system      utf8
        character_sets_dir        /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/

    Furthermore, I've successfully set up Heroku to use the RDS database. After rake db:migrate, everything looks good:

        CREATE TABLE `comments` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `commentable_id` int(11) DEFAULT NULL,
          `parent_id` int(11) DEFAULT NULL,
          `content` text COLLATE utf8_unicode_ci,
          `child_count` int(11) DEFAULT '0',
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `commentable_id` (`commentable_id`),
          KEY `index_comments_on_community_id` (`community_id`),
          KEY `parent_id` (`parent_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    In the markup, I've included:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    Also, I've set:

        production:
          encoding: utf8
          collation: utf8_general_ci

    ...in the database.yml, though I'm not very confident that anything is being done to honor any of those settings in this case, as Heroku seems to do its own config when connecting to RDS. Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL". It looks fine when Rails loads it back out of the database and renders it to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in somewhere. Somebody must have done this before, right? What am I missing?
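    A hedged observation: "looks right when viewed as UTF-8 via Latin 1" is the classic signature of a client connection negotiating latin1 while the stored data is UTF-8, so the bytes get transcoded on the way in and back out. It's worth checking what the Rails connection itself negotiates, not just the server defaults; a sketch of the check and the common workaround:

        -- Run these over the app's own connection (e.g. from the Rails console):
        SHOW VARIABLES LIKE 'character_set_c%';

        -- If character_set_client/character_set_connection report latin1 there,
        -- the adapter is not honoring database.yml; forcing it per connection
        -- is the usual workaround:
        SET NAMES utf8;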

  • Conversion between different instantiations of the same template

    - by Naveen
    I am trying to write an operator which converts between different types of the same implementation. This is the sample code:

        template <class T = int>
        class A {
        public:
            A() : m_a(0) {}

            template <class U>
            operator A<U>() {
                A<U> u;
                u.m_a = m_a;
                return u;
            }

        private:
            int m_a;
        };

        int main(void) {
            A<int> a;
            A<double> b = a;
            return 0;
        }

    However, it gives the following error for the line u.m_a = m_a;:

        Error 2 error C2248: 'A::m_a' : cannot access private member declared in class 'A' d:\VC++\Vs8Console\Vs8Console\Vs8Console.cpp 30 Vs8Console

    I understand the error occurs because A<U> is a totally different type from A<T>. Is there any simple way of solving this (maybe using a friend?) other than providing setter and getter methods? I am using Visual Studio 2008, if it matters.
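    Yes: a template friend declaration makes every A<U> a friend of every A<T>, which is the usual fix (a sketch against the class above):

        template <class T = int>
        class A {
            // Every instantiation of A is a friend of every other instantiation,
            // so the conversion operator may touch A<U>::m_a directly.
            template <class U> friend class A;

        public:
            A() : m_a(0) {}

            template <class U>
            operator A<U>() {
                A<U> u;
                u.m_a = m_a;    // OK now
                return u;
            }

        private:
            int m_a;
        };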

  • Compiler error generating Fibonacci numbers using variable-length arrays

    - by Nano HE
    Compile errors in VS2010 (Win32 Console Application template) for the code below. How can I fix them?

        unsigned long long int Fibonacci[numFibs]; // errors occur here

        error C2057: expected constant expression
        error C2466: cannot allocate an array of constant size 0
        error C2133: 'Fibonacci' : unknown size

    Complete code attached (it's a sample from the Programming in C (3rd edition) book, unmodified):

        int main()
        {
            int i, numFibs;

            printf("How may Fibonacci numbers do you want (between 1 to 75)? ");
            scanf("%i", &numFibs);
            if (numFibs < 1 || numFibs > 75) {
                printf("Bad number, sorry!\n");
                return 1;
            }

            unsigned long long int Fibonacci[numFibs];

            Fibonacci[0] = 0;   // by definition
            Fibonacci[1] = 1;   // ditto
            for (i = 2; i < numFibs; ++i)
                Fibonacci[i] = Fibonacci[i-2] + Fibonacci[i-1];
            for (i = 0; i < numFibs; ++i)
                printf("%11u", Fibonacci[i]);
            printf("\n");
            return 0;
        }
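    Variable-length arrays are a C99 feature that Microsoft's compiler does not support (and C++ lacks entirely), hence the errors at the declaration. Two sketches of the usual fixes, plus a format-string note:

        /* Option 1: fixed upper bound; the input is validated to 1..75 anyway. */
        unsigned long long int Fibonacci[75];

        /* Option 2: dynamic allocation sized at runtime (requires <stdlib.h>). */
        unsigned long long int *Fib = malloc(numFibs * sizeof *Fib);
        /* ... use Fib[i] as before, then free(Fib); */

        /* Printing an unsigned long long needs %llu rather than %u: */
        printf("%11llu", Fibonacci[i]);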

  • GWT + Seam: cannot fetch scoped beans from GWT servlet in Seam resource servlet

    - by David Göransson
    Hello all, I am trying to get session- and conversation-scoped beans into a GWT servlet in the Seam resource servlet. I have a conversation-scoped bean:

        @Name("viewFormCopyAction")
        @Scope(ScopeType.CONVERSATION)
        public class ViewFormCopyAction {}

    and a session-scoped bean:

        @Name("authenticator")
        @Scope(ScopeType.SESSION)
        public class AuthenticatorAction {}

    There is a RemoteService interface:

        @RemoteServiceRelativePath("strokesService")
        public interface StrokesService extends RemoteService {
            public Position getPosition(int conversationId);
        }

    with the corresponding async interface:

        public interface StrokesServiceAsync extends RemoteService {
            public void getPosition(int conversationId, AsyncCallback callback);
        }

    and the implementation:

        @Name("com.web.actions.forms.gwt.client.StrokesService")
        @Scope(ScopeType.EVENT)
        public class StrokesServiceImpl implements StrokesService {
            @In
            Manager manager;

            @Override
            @WebRemote
            public Position getPosition(int conversationId) {
                manager.switchConversation("" + conversationId);
                ViewFormCopyAction vfca =
                    (ViewFormCopyAction) Component.getInstance("viewFormCopyAction");
                AuthenticatorAction aa =
                    (AuthenticatorAction) Component.getInstance("authenticator");
                return null;
            }
        }

    The GWT page is within an IFrame in a regular Seam page, and the conversationId is propagated via the src attribute of the IFrame. Both bean objects end up with only null values. Can anyone see anything wrong with the code? I know that I could use strings instead of the int, but never mind that at this point.

  • MySQL slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.*
        FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows respectively, and MySQL uses "filesort", so the query gets pretty slow. I have tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN shows:

        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        | id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra                                                     |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        |  1 | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort |
        |  1 | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id | 1     |                                                           |
        +----+-------------+------------+--------+--------------------------+--------------------------+---------+---------------------+-------+-----------------------------------------------------------+
        2 rows in set (0.00 sec)

    What kind of index do I need to define to avoid MySQL using filesort? Is it even possible when the ORDER BY field is not in the WHERE clause?
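    For what it's worth, one standard line of attack: give the join an index that leads with the filtered column, so MySQL can drive from posts_tags by tag_id. The filesort itself generally remains as long as the ORDER BY column lives in a different table than the WHERE column; a sketch:

        -- Drive the join from the tag side: filter column first, join column second.
        ALTER TABLE posts_tags ADD KEY index_tag_id_and_post_id (tag_id, post_id);

        -- If posts_tags.created_at mirrors posts.created_at, sorting on the same
        -- table as the filter can use an index and skip the filesort entirely:
        --   KEY (tag_id, created_at)  with  ORDER BY posts_tags.created_at DESC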

  • The correct usage of nested #pragma omp for directives

    - by GoldenLee
    The following code ran like a charm before OpenMP parallelization was applied; with it, the code ends up in an endless loop! I'm sure that results from my incorrect use of the OpenMP directives. Would you please show me the correct way? Thank you very much.

        #pragma omp parallel for
        for (int nY = nYTop; nY <= nYBottom; nY++)
        {
            for (int nX = nXLeft; nX <= nXRight; nX++)
            {
                // Use look-up table for performance
                dLon = theApp.m_LonLatLUT.LonGrid()[nY][nX] + m_FavoriteSVISSRParams.m_dNadirLon;
                dLat = theApp.m_LonLatLUT.LatGrid()[nY][nX];
                // If you don't want to use the longitude/latitude look-up table,
                // uncomment the following line
                //NOMGeoLocate.XYToGEO(dLon, dLat, nX, nY);
                if (dLon > 180 || dLat > 180)
                {
                    continue;
                }
                if (Navigation.GeoToXY(dX, dY, dLon, dLat, 0) > 0)
                {
                    continue;
                }
                // Skip void data scanline
                dY = dY - nScanlineOffset;
                // Compute coefficients as well as the four neighboring points' values
                nX1 = int(dX);
                nX2 = nX1 + 1;
                nY1 = int(dY);
                nY2 = nY1 + 1;
                dCx = dX - nX1;
                dCy = dY - nY1;
                dP1 = pIRChannelData->operator [](nY1)[nX1];
                dP2 = pIRChannelData->operator [](nY1)[nX2];
                dP3 = pIRChannelData->operator [](nY2)[nX1];
                dP4 = pIRChannelData->operator [](nY2)[nX2];
                // Bilinear interpolation
                usNomDataBlock[nY][nX] = (unsigned short)BilinearInterpolation(dCx, dCy, dP1, dP2, dP3, dP4);
            }
        }
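    A likely culprit, offered as a hedged guess: all the working variables (dLon, dLat, dX, dY, nX1, nX2, nY1, nY2, dCx, dCy, dP1 through dP4) appear to be declared outside the loop, so every thread reads and writes the same shared copies and they race on each other's intermediate values. Declaring them inside the loop body, or listing them in a private clause, gives each thread its own; a sketch:

        #pragma omp parallel for
        for (int nY = nYTop; nY <= nYBottom; nY++) {
            for (int nX = nXLeft; nX <= nXRight; nX++) {
                double dLon, dLat, dX, dY, dCx, dCy, dP1, dP2, dP3, dP4; // per-thread
                int nX1, nX2, nY1, nY2;
                // ... same body as above ...
            }
        }

        // Equivalent alternative, keeping the outer declarations:
        // #pragma omp parallel for private(dLon, dLat, dX, dY, nX1, nX2, nY1, nY2, \
        //                                  dCx, dCy, dP1, dP2, dP3, dP4)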

  • Why is malloc+memset slower than calloc?

    - by kingkai
    It's known that calloc differs from malloc in that it initializes the memory allotted: with calloc, the memory is set to zero; with malloc, the memory is not cleared. So in everyday work I regard calloc as malloc+memset. Incidentally, for fun, I wrote the following code as a benchmark. The result is confusing.

    Code 1:

        #include <stdio.h>
        #include <stdlib.h>

        #define BLOCK_SIZE 1024*1024*256

        int main()
        {
            int i = 0;
            char *buf[10];
            while (i < 10)
            {
                buf[i] = (char*)calloc(1, BLOCK_SIZE);
                i++;
            }
        }

        time ./a.out
        real 0m0.287s
        user 0m0.095s
        sys  0m0.192s

    Code 2:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define BLOCK_SIZE 1024*1024*256

        int main()
        {
            int i = 0;
            char *buf[10];
            while (i < 10)
            {
                buf[i] = (char*)malloc(BLOCK_SIZE);
                memset(buf[i], '\0', BLOCK_SIZE);
                i++;
            }
        }

        time ./a.out
        real 0m2.693s
        user 0m0.973s
        sys  0m1.721s

    Replacing memset with bzero(buf[i], BLOCK_SIZE) in Code 2 produces a similar result. My question is: why is malloc+memset so much slower than calloc? How does calloc manage that? Thanks!
