Search Results

Search found 2128 results on 86 pages for 'suzy fresh'.

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump the database into a file, I could check it in and use it as just another type of file. The main conditions are:
      - Works with Oracle 9iR2.
      - Human readable, so we can use diff to see the differences (.dmp files don't seem readable).
      - All tables in a batch; we have more than 200 tables.
      - It stores BOTH STRUCTURE AND DATA.
      - It supports CLOB and RAW types.
      - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms.
      - It can be turned into an executable script to rebuild the database on a clean machine.
      - Not limited to really small databases (supports at least 200,000 rows).
    It is not easy; I have downloaded a lot of demos that fail in one way or another. EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in batch mode. By the way, our project has been developed for years, so some approaches that can easily be adopted on a fresh start seem hard at this point. EDIT: To understand the problem better, let's say that some users can sometimes change the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes, or it will be complicated to merge them into production.
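
    Since the hard requirement is a human-readable, diffable dump, one building block worth testing is Oracle's DBMS_METADATA package (available in 9iR2), which scripts out object DDL; table data would still need a separate extraction step. A minimal sketch, assuming the current schema's objects are wanted and the query runs from SQL*Plus (session settings and the spool path are illustrative):

        -- spool readable DDL for the schema's objects into one SQL file
        SET LONG 100000 PAGESIZE 0 LINESIZE 200
        SPOOL schema_ddl.sql
        SELECT DBMS_METADATA.GET_DDL(object_type, object_name)
          FROM user_objects
         WHERE object_type IN ('TABLE', 'VIEW', 'INDEX', 'SEQUENCE',
                               'PROCEDURE', 'FUNCTION', 'PACKAGE', 'SYNONYM');
        SPOOL OFF

    The spooled file is plain SQL, so it diffs cleanly and can be checked in like any other source file; whether 9iR2's GET_DDL covers every object type listed above is worth verifying against that release's documentation.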

  • local vs core controller

    - by latvian
    Hi, I am adding a new column and action in the local admin grid app/code/local/Mage/Adminhtml/Block/Catalog/Product/Grid.php, which works fine. However, the local controller /app/code/local/Mage/Adminhtml/Block/Catalog/Product.php is not being used; it is not overloading the core one, /app/code/core/Mage/Adminhtml/Block/Catalog/Product.php. This is an almost fresh install of Magento 1.4.0.1. I am the only one working on it, so I know it is not overloaded by some custom controller. I have disabled all custom modules, rolled back most of my changes, checked /etc/Modules/Mage_Catalog.xml, refreshed the cache every possible way, and logged in and out. Nothing... it is still using the core controller copy. Why? How do you troubleshoot this -- that is, at what point does Magento decide between the core and local copies? It's even stranger because Magento does not parse the local Adminhtml config.xml but does use the local Adminhtml copy of the blocks. Any pointer would help. I would like to keep everything in local code. Thank you, Margots
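
    For what it's worth, overriding core classes by copying them into app/code/local relies on Magento's include-path autoloader, which (as far as I can tell) resolves class files through local first but never merges a config.xml placed under local/Mage, because module configuration is loaded from the code pool each module is declared in -- an asymmetry that matches the "local blocks work, local config doesn't" symptom. The supported route is a small custom module with a rewrite; a sketch, with hypothetical module and class names:

        <!-- app/code/local/Mycompany/Catalogfix/etc/config.xml (a sketch) -->
        <config>
            <global>
                <blocks>
                    <adminhtml>
                        <rewrite>
                            <!-- replaces Mage_Adminhtml_Block_Catalog_Product_Grid -->
                            <catalog_product_grid>Mycompany_Catalogfix_Block_Catalog_Product_Grid</catalog_product_grid>
                        </rewrite>
                    </adminhtml>
                </blocks>
            </global>
        </config>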

  • jQuery: Trying to find a specific instance of a class by number

    - by mattelliottIT
    I guess I have just hit a mental block with this one; maybe some fresh eyes will help. Basically I have a few instances of the class "menu-item" which, when clicked, call the click function via jQuery and bring up a video. Instead of giving each one an id as well as a class, I am trying to find which instance of the class was clicked (1, 2, 3, etc.). I just can't seem to get it:

        //click listener for menu-items
        $('.menu-item').click(function(event) {
            var o = $('.menu-item');
            var count = o.length();
            //
            switch(count) {
                case 0 : filename == 'letters'; break;
                case 1 : filename == 'the-gift'; break;
            }
            var videoPlayer = '<video controls width="618px">';
            videoPlayer += '<source src="_video/' + filename + '.mp4" />';
            videoPlayer += '</video>';
            //place video
            $('#videoCont').html(videoPlayer);
        });

    I'm trying to create an array where each instance of 'menu-item' is one array item. (By the way, for now I am just proofing this with the mp4 filetype before I add in the ogv and webm formats.) Thanks for any and all help!
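
    For reference, jQuery can report the position of the clicked element directly via .index(), which removes the need for ids or manual counting; note also that .length is a property (not a method) and that == compares where = assigns. A sketch along those lines, reusing the filenames from the question:

        $('.menu-item').click(function(event) {
            // 0-based position of the clicked element among all .menu-item elements
            var i = $('.menu-item').index(this);

            var filenames = ['letters', 'the-gift'];
            var filename = filenames[i];

            var videoPlayer = '<video controls width="618">';
            videoPlayer += '<source src="_video/' + filename + '.mp4" />';
            videoPlayer += '</video>';
            $('#videoCont').html(videoPlayer);
        });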

  • Cannot bind to IPv6 address

    - by ereOn
    I am facing a strange problem on my Ubuntu Karmic system. When I call getaddrinfo() with AI_PASSIVE and AF_UNSPEC, for an empty host and the UDP 12000 port, to get a bindable address, I only get back one IPv4 result (0.0.0.0:12000, for instance). If I change my call and specify AF_INET6 instead of AF_UNSPEC, then getaddrinfo() returns "Name or service not known". Shouldn't I get [::]:12000 as a result? The same thing happens if I set the host to ::1. When I call getaddrinfo() without AI_PASSIVE (to get a "connectable" address) for the host "localhost" and the UDP 12000 port, I first get [::1]:12000, then 127.0.0.1:12000. So apparently my system is IPv6 ready (I can ping both IPv4 and IPv6 addresses, and DNS resolution works). But how is it that I can't get an IPv6 address to bind to with getaddrinfo()? Do you guys have any idea about what could be wrong? My OS is Ubuntu Karmic, a fresh install without any networking tweaking. Thank you.
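
    A minimal repro makes this easier to pin down; the sketch below asks explicitly for a passive IPv6/UDP address. One flag worth examining is AI_ADDRCONFIG (applied by default in some resolver setups), which suppresses IPv6 results when no global IPv6 address is configured on any interface:

        #include <stdio.h>
        #include <string.h>
        #include <netdb.h>

        int main(void)
        {
            struct addrinfo hints, *res, *p;
            memset(&hints, 0, sizeof hints);
            hints.ai_family   = AF_INET6;    /* the failing case from the question */
            hints.ai_socktype = SOCK_DGRAM;
            hints.ai_flags    = AI_PASSIVE;  /* deliberately NOT AI_ADDRCONFIG */

            int rc = getaddrinfo(NULL, "12000", &hints, &res);
            if (rc != 0) {
                fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
                return 1;
            }
            for (p = res; p != NULL; p = p->ai_next)
                printf("got family %d (AF_INET6 is %d)\n", p->ai_family, AF_INET6);
            freeaddrinfo(res);
            return 0;
        }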

  • Vectorize matrix operation in R

    - by Fernando
    I have an R x C matrix filled to the k-th row and empty below that row. What I need to do is fill the remaining rows. To do this, I have a function that takes 2 entire rows as arguments, does some calculations and outputs 2 fresh rows (these outputs will fill the matrix). I have a list of all 'pairs' of rows to be processed, but my for loop is not helping performance:

        # M is the matrix
        # nrow(M) and k are even, so nLeft is even
        M = matrix(1:48, ncol = 3)
        # half to fill
        k = nrow(M)/2
        # simulate empty rows to be filled
        M[-(1:k), ] = 0
        cat('before fill')
        print(M)
        # number of empty rows to fill
        nLeft = nrow(M) - k
        nextRow = k + 1
        # list of rows to process (could be any order of non-empty rows)
        idxList = matrix(1:k, ncol = 2)
        for (i in 1:(nLeft / 2)) {
            row1 = M[idxList[i, 1], ]
            row2 = M[idxList[i, 2], ]
            # the two columns in 'results' will become 2 rows in M
            # fake result, return 2*row1 and 3*row2
            results = matrix(c(2*row1, 3*row2), ncol = 2)
            # fill the matrix
            M[nextRow, ] = results[, 1]
            nextRow = nextRow + 1
            M[nextRow, ] = results[, 2]
            nextRow = nextRow + 1
        }
        cat('after fill')
        print(M)

    I tried to vectorize this, but failed... I'd appreciate any help on improving this code, thanks!
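
    If the row transformation itself vectorizes (as the fake 2*row1 / 3*row2 one does), the whole loop can collapse into matrix indexing; a sketch under that assumption:

        top <- 2 * M[idxList[, 1], , drop = FALSE]  # rows for odd target offsets
        bot <- 3 * M[idxList[, 2], , drop = FALSE]  # rows for even target offsets
        stacked <- rbind(top, bot)
        # interleave: row i of 'top' then row i of 'bot', matching the loop's order
        ord <- as.vector(rbind(seq_len(nrow(top)), nrow(top) + seq_len(nrow(bot))))
        M[(k + 1):nrow(M), ] <- stacked[ord, ]

    If the real function can't be vectorized, an apply over the rows of idxList at least moves the bookkeeping out of user code, though it won't change the asymptotic cost.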

  • iOS OpenGL ES 1.1 jerky animation using CADisplayLink (reboot fixes for a while)

    - by timthecoder
    I'm using OpenGL ES 1.1 and CADisplayLink to animate a 3d scene. If the iOS device has been rebooted fairly recently, the animation is smooth and the time delta between two displayLink.timestamp calls is fairly even. But after the device has been in use for a few hours or days, and my app has been run a few times, the animation becomes jerky and the time deltas ramp up and then reset to a lower value, only to ramp up again. Like this:

        2012-09-01 23:42:58.770 [2678:707] dt= 0.021139
        2012-09-01 23:42:58.787 [2678:707] dt= 0.022183
        2012-09-01 23:42:58.804 [2678:707] dt= 0.023223
        2012-09-01 23:42:58.820 [2678:707] dt= 0.024270
        2012-09-01 23:42:58.837 [2678:707] dt= 0.009679
        2012-09-01 23:42:58.853 [2678:707] dt= 0.010750
        2012-09-01 23:42:58.870 [2678:707] dt= 0.011766
        2012-09-01 23:42:58.887 [2678:707] dt= 0.012806
        2012-09-01 23:42:58.903 [2678:707] dt= 0.013847
        2012-09-01 23:42:58.920 [2678:707] dt= 0.014890
        2012-09-01 23:42:58.937 [2678:707] dt= 0.015933
        2012-09-01 23:42:58.953 [2678:707] dt= 0.016976
        2012-09-01 23:42:58.970 [2678:707] dt= 0.018011
        2012-09-01 23:42:58.987 [2678:707] dt= 0.019055
        2012-09-01 23:42:59.003 [2678:707] dt= 0.020097
        2012-09-01 23:42:59.020 [2678:707] dt= 0.021143
        2012-09-01 23:42:59.037 [2678:707] dt= 0.022181
        2012-09-01 23:42:59.054 [2678:707] dt= 0.023222
        2012-09-01 23:42:59.071 [2678:707] dt= 0.024288
        2012-09-01 23:42:59.087 [2678:707] dt= 0.009624
        2012-09-01 23:42:59.103 [2678:707] dt= 0.010728
        2012-09-01 23:42:59.121 [2678:707] dt= 0.011763
        2012-09-01 23:42:59.137 [2678:707] dt= 0.012808
        2012-09-01 23:42:59.153 [2678:707] dt= 0.013847
        2012-09-01 23:42:59.170 [2678:707] dt= 0.014891
        2012-09-01 23:42:59.187 [2678:707] dt= 0.016002
        2012-09-01 23:42:59.203 [2678:707] dt= 0.016979
        2012-09-01 23:42:59.220 [2678:707] dt= 0.018016
        2012-09-01 23:42:59.237 [2678:707] dt= 0.019042
        2012-09-01 23:42:59.253 [2678:707] dt= 0.020099
        2012-09-01 23:42:59.270 [2678:707] dt= 0.021138
        2012-09-01 23:42:59.287 [2678:707] dt= 0.022185
        2012-09-01 23:42:59.304 [2678:707] dt= 0.023222
        2012-09-01 23:42:59.320 [2678:707] dt= 0.024265
        2012-09-01 23:42:59.337 [2678:707] dt= 0.009681
        2012-09-01 23:42:59.354 [2678:707] dt= 0.010736

    And then if the iOS device is rebooted, the animation is smooth again. The problem even occurs on my menu screen, when almost no game-related calculations are going on in the UpdateAnimation() function. I don't understand what is going on, and why a fresh reboot always fixes this problem for a while.
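
    The ramping deltas look like the update callback drifting against the display's vsync. Whatever the underlying cause, a common defence is a fixed-timestep accumulator driven by displayLink.timestamp, so jitter in callback timing stops feeding straight into the simulation. A sketch (the method names and the two CFTimeInterval ivars, lastTimestamp and accumulator, are placeholders):

        - (void)onDisplayLink:(CADisplayLink *)link
        {
            if (lastTimestamp == 0)              // first callback: no delta yet
                lastTimestamp = link.timestamp;

            accumulator  += link.timestamp - lastTimestamp;
            lastTimestamp = link.timestamp;

            const CFTimeInterval step = 1.0 / 60.0;  // fixed 60 Hz simulation step
            while (accumulator >= step) {
                [self updateAnimationWithStep:step]; // stand-in for UpdateAnimation()
                accumulator -= step;
            }
            [self render];                           // stand-in for the draw call
        }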

  • g++ fails mysteriously only if a .h is in a certain directory

    - by ggambett
    I'm experiencing an extremely weird problem in a fresh OSX 10.4.11 + Xcode 2.5 installation. I've reduced it to a minimal test case. Here's test.cpp:

        #include "macros.h"

        int main (void)
        {
            return 1;
        }

    And here's macros.h:

        #ifndef __JUST_TESTING__
        #define __JUST_TESTING__

        template<typename T>
        void swap (T& pT1, T& pT2)
        {
            T pTmp = pT1;
            pT1 = pT2;
            pT2 = pTmp;
        }

        #endif //__JUST_TESTING__

    This compiles and works just fine if both files are in the same directory. HOWEVER, if I put macros.h in /usr/include/gfc2 (it's part of a custom library I use) and change the #include in test.cpp, compilation fails with this error:

        /usr/include/gfc2/macros.h:4: error: template with C linkage

    I researched that error and most of the comments point to a "dangling extern C", which doesn't seem to be the case at all. I'm at a complete loss here. Is g++ for some reason assuming everything in /usr/include/gfc2 is C, even though it's included from a .cpp file that doesn't say extern "C" anywhere? Any ideas? EDIT: It does compile if I use the full path in the #include, i.e. #include "/usr/include/gfc2/macros.h". EDIT2: It's not including the wrong header. I've verified this using cpp, g++ -E, and renaming macros.h to foobarmacros.h
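
    Given that the full-path include compiles, two usual suspects remain: the short include resolving to a different file after all, or some header processed before macros.h opening an extern "C" block it never closes. g++'s -H flag prints every header in the order it is actually included, which checks both at once; a sketch:

        # -H lists each included header (nested by depth) as the compile runs;
        # check which macros.h is pulled in and what comes immediately before it
        g++ -H -c test.cpp 2>&1 | head -40

    As a second probe, wrapping the template in an explicit extern "C++" { ... } inside macros.h should make the error disappear if something upstream really did leave C linkage open.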

  • CI + Joomla 1.5

    - by DMin
    Hi, this is something that I just cooked up with Joomla and CodeIgniter (CI). I wrote my database-intensive application in CodeIgniter, and the frontend is Joomla. I'm using Jumi (a Joomla extension) so I can include the CI files inside Joomla, basically to insert the content generated by CI into Joomla articles. The problem is, you can't include CI files directly from Joomla using Jumi, because CI tends to route the pages; instead of seeing your Joomla page with the CI content, you'd be redirected to the CI page itself. I did a little workaround for this: I made an additional page that basically does a cURL request to the CI page, gets the data and echoes it out. From Jumi, I include this cURL page instead. A couple of questions (I've seen at least a few posts saying that CI + Joomla is difficult to do (link)):
    1) Do you see any glaring security issues or possible performance issues?
    2) Do you know of a better way to implement this?
    3) What do you think of this? Do you think this is a good way to do it?
    There is one component out there that plugs CI into Joomla, but it requires a fresh CI install, it allows only one controller, and the download link is down as well.
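
    For the bridge page itself, a minimal cURL sketch (the CI URL is an assumption) that returns the content instead of printing it, and won't hang the Joomla article if CI is slow:

        <?php
        $ch = curl_init('http://example.com/ci/index.php/reports/summary'); // assumed URL
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return rather than echo
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);           // cap the wait
        $html = curl_exec($ch);
        curl_close($ch);
        echo ($html !== false) ? $html : 'content temporarily unavailable';

    Performance-wise, note that every article view now costs an extra local HTTP round trip; caching the response even for a few seconds would take most of that sting out.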

  • Is there any way to access CodeIgniter language and config properties from included JavaScript files

    - by ubermensch
    Good morning! I'm having great success so far with CodeIgniter. I'm new to PHP and web development in general, but I feel that CodeIgniter is giving me a leg up while I catch up on the basics. My question for today is this: I have been happily loading config and lang values from my views for a while now, and everything is working fine. But what about JavaScript files being linked into my views? Is there any way to make the $this->lang->line() and $this->config->item() function references available to me in my JavaScript files? I am implementing jQuery client-side validation and would like to pull in my error messages from the server, both to support internationalisation and to make sure that validation degrades gracefully if JavaScript is not available, in that the error messages pushed back into the view from the server-side validation are identical to those displayed dynamically by the jQuery validation. I would not like to have to keep coming back to make sure that these strings are kept in sync. As for internationalisation, I'm fresh out of ideas on how to support that if it turns out that lang and config item strings are completely unavailable from my JS files. Any help you can provide would be greatly appreciated! :)
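
    Static JS files can't run PHP, but the view that links them can hand the already-resolved strings across. A sketch (the key names are placeholders):

        <script type="text/javascript">
        // emitted by a CI view, so the language files stay the single source of truth
        var APP_LANG = <?php echo json_encode(array(
            'required'    => $this->lang->line('required'),
            'valid_email' => $this->lang->line('valid_email'),
            'base_url'    => $this->config->item('base_url'),
        )); ?>;
        </script>

    Any script included after that block can then read APP_LANG.required and friends, and switching languages server-side flows through to the client automatically.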

  • PHP variable equals true no matter what the value, even 0

    - by kaigoh
    This is the var_dump:

        object(stdClass)#27 (8) {
          ["SETTING_ID"]=> string(2) "25"
          ["SETTING_SOURCE"]=> string(2) "XV"
          ["SETTING_FLEET"]=> string(3) "313"
          ["SETTING_EXAM"]=> string(1) "A"
          ["SETTING_HIDE"]=> string(1) "0"
          ["SETTING_THRESHOLD"]=> string(1) "0"
          ["SETTING_COUNT"]=> string(8) "POSITIVE"
          ["SETTING_USAGE"]=> string(7) "MILEAGE"
        }

    The variable I am testing is SETTING_HIDE. This is being pulled from MySQL using the CodeIgniter framework. I don't know if it is just me being thick after a rather long day at work or what, but no matter what value the variable holds, any if statement made against it returns true, even when typecast as a boolean or compared directly against a string, i.e. == "0" or == "1". Anyone with a fresh pair of eyes care to make me feel silly!?! :) Just to clarify, I have tried the following:

        if($examSetting->SETTING_HIDE == "1") {
            $showOnABC = "checked=\"checked\"";
        }
        if((bool)$examSetting->SETTING_HIDE) {
            $showOnABC = "checked=\"checked\"";
        }

    Further on in my code, if($examSetting->SETTING_COUNT == "POSITIVE") works perfectly.
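
    One way to take type juggling out of the picture is to dump value and result at the exact point of comparison, and to use === so both value and type must match; a quick sketch against the object above:

        var_dump($examSetting->SETTING_HIDE);          // expect: string(1) "0"
        var_dump($examSetting->SETTING_HIDE == "1");   // loose: expect bool(false)
        var_dump($examSetting->SETTING_HIDE === "1");  // strict: value AND type

        if ($examSetting->SETTING_HIDE === "1") {
            $showOnABC = 'checked="checked"';
        }

    If the loose comparison really prints bool(true) for "0", the object reaching the if is not the one that was dumped -- dumping inside the same branch settles that.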

  • [RPM Building] How to take user input during install

    - by Sam
    When I create a Debian package, I am able to write a post-installation shell script that runs just fine. Currently mine is configured to do:

        echo "Please enter your MySQL Database user (default root)"
        read MYSQL_USER
        echo "Please enter the MySQL Database user password (default root)"
        read -s MYSQL_PASS

        DBEXIST=0
        CMD="create database lportal;use lportal;"
        (mysql -u$MYSQL_USER -p$MYSQL_PASS -e "$CMD") || ((DBEXIST++))
        if [ $DBEXIST -ne 0 ]; then
            echo "Setup finished, but MySQL already has an lportal table. This could be from a previous installation of Liferay. If you want a fresh installation of this bundle, please remove the lportal table and reinstall this package."
        fi

    This works fine for Ubuntu. However, I can't seem to get user input to work with RPMs for Fedora. Is there a good way to take user input? From what I understand, RPMs were designed not to allow interactive installs. However, I can't see a better way to do this. Is there possibly a way to automatically find local MySQL settings without asking the user? Otherwise, what's the best way to ask for user input?
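
    RPM scriptlets are indeed deliberately non-interactive (stdin may not be a terminal, and tools like yum run whole transactions unattended), so the conventional substitute is a config file the admin can edit before install, with sane defaults. A %post sketch under that convention (the /etc/sysconfig path and file name are assumptions):

        %post
        # non-interactive: pick up DB credentials from a config file if present
        if [ -f /etc/sysconfig/liferay-setup ]; then
            . /etc/sysconfig/liferay-setup
        fi
        MYSQL_USER=${MYSQL_USER:-root}
        MYSQL_PASS=${MYSQL_PASS:-root}

        CMD="create database lportal;use lportal;"
        if ! mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e "$CMD" >/dev/null 2>&1; then
            echo "lportal already exists (or MySQL is unreachable); skipping DB creation." >&2
        fi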

  • Why can’t I create a database in an empty ASP MVC 2 project using Project->Add->New Item->SQL Server

    - by Dr Dork
    I'm diving head first into ASP MVC and am playing around with creating and manipulating a database. I did a search and found this tutorial for creating a database; however, when I follow it, I get this error right at the start, when trying to add a new database to my fresh, empty ASP MVC 2 project:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    The only requirement the tutorial mentioned was SQL Server Express, but when I went to download it, it said it was already installed. I'm assuming it was part of the VS 2010 RC I installed and am running, so I don't know what else I need if I am missing something. This is all new to me, so I'm sure I'm missing something obvious here, and after I'm done posting this question I plan to do some more research into the topic of databases and how they work with ASP MVC. In the meantime, I was hoping you could help me answer a couple of high-level questions:
    1) What am I missing/forgetting to do that is causing this error?
    2) Any suggestions for good resources/tutorials that focus on using databases with ASP MVC?
    I've done a lot of database programming in the past, so I'm familiar with the concepts of relational databases and the SQL language. I wish I could find a good resource for learning how to work with them in an ASP dev environment, as well as a good breakdown of all the related technologies used for working with them (i.e. LINQ to SQL). Thanks so much in advance for all your help! I'm going to start researching these questions right now.
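
    Error 26 means the client could not even locate the SQL Server instance it was told to use, so the first things to check are the instance name and whether the 'SQL Server (SQLEXPRESS)' service is actually running (services.msc). For the VS-bundled Express instance, a typical MVC 2-era connection string looks like this sketch (the name and .mdf filename are assumptions):

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDatabase.mdf;Integrated Security=True;User Instance=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>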

  • jQuery failure after site went live

    - by Brandon Condrey
    I have been designing a site for weeks using jQuery. I don't have a local server or a testing server, so I just created a directory through FTP, '/testing'. Everything was working great in the testing directory. I attempted to go live tonight by moving all the files in '/testing' to the root directory, and I changed all file paths and script sources accordingly. The site loads, but everything related to jQuery is non-functional. The JavaScript console gives errors like (just as an example, from a plugin): '$.os.name' is not a function. I'm at a loss for what to do. I changed the paths referencing the jQuery library, installed a fresh copy of jQuery (to a new directory), etc. There is a WordPress installation in a different directory, '/blog'. I've read about some compatibility issues with WordPress, but that seems to be related to using jQuery inside WordPress, which I am not. I'm not sure if any code would be beneficial, since it was all functional in a different directory. Your help is greatly appreciated.
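
    An error like '$.os.name is not a function' usually means the plugin file that defines $.os never loaded, which after a directory move almost always comes down to script paths. Root-relative src attributes (leading slash) resolve identically from /testing and from the root; a sketch, with placeholder file names:

        <!-- root-relative: works from any directory depth -->
        <script type="text/javascript" src="/js/jquery.min.js"></script>
        <script type="text/javascript" src="/js/jquery.os-plugin.js"></script>

    The browser's network panel (or View Source plus clicking each src) shows immediately which script requests return 404 after the move.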

  • What happens to existing workspaces after upgrading to TFS 2010

    - by user351671
    Hi, I was looking for some insight into what happens to existing workspaces, and files that are already checked out, after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS Installation Guide and searched the web; all I could find are upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I've created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder if it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it. Thanks in advance...
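
    A sketch of the usual client-side recovery after a server move with a new instance ID (the cache path below is for Visual Studio 2010 and the collection URL is a placeholder; both are assumptions worth verifying):

        rem close Visual Studio first, then clear the client-side TFS cache so
        rem clients stop matching workspaces against the old instance id
        rd /s /q "%LOCALAPPDATA%\Microsoft\Team Foundation\3.0\Cache"

        rem re-query workspaces against the new server to rebuild the cache
        tf workspaces /collection:http://newtfs:8080/tfs/DefaultCollection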

  • Weird Rails database errors

    - by Jason Swett
    I've had some trouble getting my Rails app to connect to PostgreSQL, so I decided to just say screw it and use SQLite for now. (I'm using the tutorial here: http://guides.rubyonrails.org/getting_started.html) I started a BRAND NEW, fresh Rails app from this tutorial. When I visit my app in the browser after deleting public/index.html, I get this the first time:

        Please install the pg adapter: `gem install activerecord-pg-adapter` (no such file to load -- active_record/connection_adapters/pg_adapter)

    That's odd to me, because I'm not mentioning PostgreSQL anywhere. Here's my database.yml:

        # SQLite version 3.x
        #   gem install sqlite3-ruby (not necessary on OS X Leopard)
        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: db/production.sqlite3
          pool: 5
          timeout: 5000

    To make things more confusing, I only get that "pg adapter" error on the first load. For every subsequent page request, I get this error:

        ActiveRecord::ConnectionNotEstablished

    So even though I removed all mention of PostgreSQL, I'm still getting errors. What could be going on?

  • "This program might not have installed correctly" message in Windows 7 RC

    - by kliu
    I have an installer that works perfectly under NT 5.x, Vista, and Windows 7. It contains the proper manifest for UAC on NT 6.x. But starting with Windows 7 RC, every time the setup program closes, Windows produces an erroneous "This program might not have installed correctly" message, even though the program did install correctly, with no problems whatsoever. I never got these spurious messages in Vista or in the Windows 7 beta. I sent a bug report to Microsoft but have not heard back. I thought this might just be a glitch in the Windows 7 RC, but the problem is still there on a fresh install of one of the very recent RTM-escrow builds that was leaked. Microsoft has no documentation whatsoever about this -- not even a hint as to what might be triggering it. Even more frustrating is that I get the "This program might not have installed correctly" message even if I cancel the install on the very first are-you-sure-you-want-to-proceed screen, before any of the installation code (creating a temp dir, extracting files, copying, registry, etc.) is ever run. Has anyone figured this one out?
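
    For what it's worth, that message comes from the Program Compatibility Assistant, which watches setup-like processes and complains when they exit without registering an uninstall entry. The documented opt-out is declaring Windows 7 support in the application manifest's compatibility section; a sketch (the GUID shown is the published Windows 7 compatibility ID -- worth double-checking against current docs):

        <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
          <application>
            <!-- tells the Program Compatibility Assistant this binary was
                 designed for and tested on Windows 7 -->
            <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
          </application>
        </compatibility>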

  • Windows Unique Identifier?

    - by user775013
    So there is this software. When installed, it somehow (probably by reading a file or registry entry) recognizes my Windows operating system. It's supposed to do some tasks only once per unique computer. If I uninstall the program and reinstall it, the software remembers that it has been installed and therefore does not do the task. If I use System Restore, the software also does not do the tasks. If I load an image of the system from before the install, it also doesn't do the tasks. Only if I install a fresh copy of Windows does the software do the task. Even the IP does not matter; everything is the same, except that it is a brand new copy of the Windows operating system. So I guess that the software reads some kind of unique operating system identifier, then connects to a server to create a user profile. So the question is: which files or entries could the software be using as a system identifier? So far I have found entries under the registry keys WindowsNT/CurrentVersion and Windows/Cryptography, but the software does not rely on them. Where else should I search? Is there any software that could help me find out?
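
    One well-known per-installation value sits exactly where the poster was already looking: the MachineGuid under the Cryptography key, generated once when Windows is installed. It survives uninstalls and restores of older images, but a fresh Windows install regenerates it -- matching the behaviour described. A quick check with the built-in reg tool:

        reg query "HKLM\SOFTWARE\Microsoft\Cryptography" /v MachineGuid

    Beyond single keys, Sysinternals Process Monitor can log every registry and file read the software performs during its first run, which usually reveals the identifier directly.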

  • Triangulation in 3D Space

    - by w3b_wizzard
    Disclaimer: This is for class; however, I'm fresh out of ideas and a nudge in the right direction would be much appreciated. Also, this needs to be implemented in raw C, so no fancy libraries can be used. I have to write a search-and-rescue simulator for submarines. It has to find a probe that is randomly placed in 3D space within a grid of MAX_XYZ (100000). The only tool I'm given is a "ping" which gives the magnitude of the distance between a certain sub and the probe. The goal is to optimize the cost of the entire operation, so a brute-force attempt, like looking at every single coordinate, won't work. Hence I was thinking triangulation. Now, it makes loads of sense to me: place three subs, and each of them uses its ping to get the distance between itself and the probe. Since the subs have known distances relative to one another, it's easy to build the base of a tetrahedron with them, and the results of the pings will point to a certain coordinate. The problem I'm having is how to figure out the elevation, or the height, of the tetrahedron. So what I have as data is the following:
      - Distances between subs (in vector format)
      - Angles between each pair of subs (very easy to compute)
      - Distance between each sub and the probe (3 segments from the base to the peak)
      - Angles inside each of the outer 3 surfaces of the tetrahedron
    I tried finding some sort of relationship between the vertices of the tetrahedron and the relative angles in each of them; however, all I found dealt with tetrahedrons built from equilateral triangles, which isn't much help. I have the impression this can be easily solved with trig, but either I'm not seeing it or I need more coffee. Any suggestions would be appreciated!
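
    With three ping distances, the probe lies on the intersection of three spheres (strictly speaking this is trilateration rather than triangulation), and choosing the coordinate frame around the subs gives the height in closed form. A raw-C sketch, assuming sub 1 at the origin, sub 2 on the x-axis at distance d, and sub 3 in the xy-plane at (i, j, 0):

        #include <math.h>

        /* r1, r2, r3: ping distances from subs 1..3 to the probe.
           Outputs the probe position in the subs' local frame; transform back
           to grid coordinates afterwards. z carries a +/- ambiguity. */
        void trilaterate(double d, double i, double j,
                         double r1, double r2, double r3,
                         double *x, double *y, double *z)
        {
            *x = (r1*r1 - r2*r2 + d*d) / (2.0 * d);
            *y = (r1*r1 - r3*r3 + i*i + j*j) / (2.0 * j) - (i / j) * (*x);
            double zsq = r1*r1 - (*x)*(*x) - (*y)*(*y);
            *z = (zsq > 0.0) ? sqrt(zsq) : 0.0;  /* probe above or below the base plane */
        }

    The sign ambiguity on z is inherent to three reference points; a fourth ping (or a known bound such as the sea surface) resolves it.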

  • How do you get the solution directory in C# (VS 2008) in code?

    - by IsaacB
    Hi, got an annoying problem here. I've got an NHibernate/Forms application I'm working on through SVN. I made some of my own controls, but when I drag and drop them (or view some form editors where I have already dragged and dropped) onto some of my other controls, Visual Studio decides it needs to execute some of the code I wrote, including the part that looks for hibernate.cfg.xml. I have no idea why this is, but sometimes, when it executes the code during form load or drag and drop, it switches the current directory to C:\program files\vs 9.0\common7\ide, and then NHibernate throws an exception that it can't find hibernate.cfg.xml, because I'm searching for it with a relative path. Now, I don't want to hard-code the location of hibernate.cfg.xml, or just copy hibernate.cfg.xml to the ide directory (which does work). I want a solution that gets the solution's directory while the current directory is common7\ide -- something that will let someone view my forms in the designer on a fresh checkout to an arbitrary directory on an arbitrary machine. And no, I'm not about to load the controls in code; I have so many controls within controls that it is a nightmare to line everything up without the designer. I tried a pre-build event that made a file containing the solution directory, but of course, how can I find that from common7\ide? All the project files need to be in the solution directory because of SVN. Thanks for your help guys, I've already spent a few hours fiddling with this in vain.
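
    A different way out, rather than locating the solution directory from inside devenv, is to stop the NHibernate bootstrap from running under the designer at all. LicenseManager works even in constructors, where Control.DesignMode is famously unreliable; a sketch:

        using System.ComponentModel;

        // at the top of the control's constructor (or wherever the NHibernate
        // configuration is first touched):
        if (LicenseManager.UsageMode == LicenseUsageMode.Designtime)
            return;  // the VS designer is instantiating us; skip hibernate.cfg.xml

    With that guard in place, the designer never needs the config file, and the relative path keeps working at runtime regardless of checkout location.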

  • What does Postgres do when BEGIN is run on a connection in autocommit mode?

    - by DNS
    I'm trying to better understand the concept of 'autocommit' when working with a Postgres (psycopg) connection. Let's say I have a fresh connection, set its isolation level to ISOLATION_LEVEL_AUTOCOMMIT, then run this SQL directly, without using the cursor begin/rollback methods (as an exercise; not saying I actually want to do this):

        INSERT A
        INSERT B
        BEGIN
        INSERT C
        INSERT D
        ROLLBACK

    What happens to INSERTs C & D? Is autocommit purely an internal setting in psycopg that affects how it issues BEGINs? In that case, the above SQL is unaffected: INSERTs A & B are committed as soon as they're done, while C & D are run in a transaction and rolled back. What isolation level is that transaction run under? Or is autocommit a real setting on the connection itself? In that case, how does it affect the handling of BEGIN? Is it ignored, or does it override the autocommit setting to actually start a transaction? What isolation level is that transaction run under? Or am I completely off-target?
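
    For what it's worth, psycopg's autocommit is client-side: it simply stops issuing an implicit BEGIN before your first statement, and an explicit BEGIN sent as SQL still opens a genuine server-side transaction at the server's default isolation level (READ COMMITTED unless configured otherwise). So in the exercise above, A and B persist while C and D are rolled back. A small sketch (table and DSN are assumptions):

        import psycopg2
        import psycopg2.extensions

        conn = psycopg2.connect("dbname=test user=postgres")  # assumed DSN
        conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

        cur = conn.cursor()
        cur.execute("INSERT INTO t VALUES ('A')")  # committed immediately
        cur.execute("INSERT INTO t VALUES ('B')")  # committed immediately
        cur.execute("BEGIN")                       # real transaction opens here
        cur.execute("INSERT INTO t VALUES ('C')")
        cur.execute("INSERT INTO t VALUES ('D')")
        cur.execute("ROLLBACK")                    # C and D vanish; A and B remain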

  • Operator Overloading << in C++

    - by thlgood
    I'm new to C++. I wrote this simple program to practice overloading. This is my code:

        #include <iostream>
        #include <string>
        using namespace std;

        class sex_t
        {
        private:
            char __sex__;
        public:
            sex_t(char sex_v = 'M'):__sex__(sex_v)
            {
                if (sex_v != 'M' && sex_v != 'F')
                {
                    cerr << "Sex type error!" << sex_v << endl;
                    __sex__ = 'M';
                }
            }

            const ostream& operator << (const ostream& stream)
            {
                if (__sex__ == 'M')
                    cout << "Male";
                else
                    cout << "Female";
                return stream;
            }
        };

        int main(int argc, char *argv[])
        {
            sex_t me('M');
            cout << me << endl;
            return 0;
        }

    When I compile it, it prints a lot of error messages. They were in a mess, and it was too hard for me to find the useful part:

        sex.cpp: In function 'int main(int, char**)':
        sex.cpp:32:10: error: no match for 'operator<<' in 'std::cout << me'
        sex.cpp:32:10: note: candidates are:
        /usr/include/c++/4.6/ostream:110:7: note: std::basic_ostream<_CharT, _Traits>::__ostream_type& std::basic_ostream<_CharT, _Traits>::operator<<(std::basic_ostre
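
    The root cause: a member operator<< makes the class object the left-hand operand (me << cout), so cout << me finds no matching overload. Stream insertion is conventionally a non-member (often friend) function taking the stream first, and the stream must be non-const so it can be written to. A sketch of the usual shape:

        #include <iostream>

        class sex_t {
            char sex_;
        public:
            explicit sex_t(char v = 'M') : sex_(v) {}
            friend std::ostream& operator<<(std::ostream& os, const sex_t& s)
            {
                return os << (s.sex_ == 'M' ? "Male" : "Female");
            }
        };

        int main()
        {
            sex_t me('M');
            std::cout << me << std::endl;  // now resolves to the friend overload
            return 0;
        }

    (As an aside, identifiers containing double underscores, like __sex__, are reserved for the implementation in C++; plain sex_ is safer.)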

  • Prevent cached objects to end up in the database with Entity Framework

    - by Dirk Boer
    We have an ASP.NET project with Entity Framework and SQL Azure. A big part of our data only needs to be updated a few times a day; other data is very volatile. The data that barely changes we cache in memory at startup, detach from the context, and then use mainly for reading, drastically lowering the number of database requests we have to make. The volatile data is requested through a DbContext on every HTTP request. When we update the cached data, we send a message to all instances to fetch a fresh version of all the data from the SQL server. So far, so good. Until we introduced a bug that linked one of these 'cached' objects to the 'volatile' data and did a SaveChanges. Well, that was quite a mess. The whole data tree was added again and again by every update, corrupting the whole database with a whole lot of duplicated data. As a complete hack, I added a completely arbitrary column with a unique constraint and some gibberish data on one of the root tables, hopefully failing the SaveChanges() the next time we introduce such a bug, because it will violate the unique constraint. But it is of course hacky, and I'm still pretty scared ;P Are there any better ways to prevent whole trees of cached objects from ending up in the database?
    More information: the project is ASP.NET MVC, and I cache this data because it is mainly read-only, which saves a ton of extra database calls per HTTP request.
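
    One belt-and-braces option is to load the read-mostly data without change tracking in the first place; entities that were never attached to the context cannot be re-inserted by a later SaveChanges. A sketch, assuming an EF 4.1+ DbContext and a hypothetical Settings set:

        // AsNoTracking() materializes the entities without registering them in
        // the change tracker, so accidental links from tracked (volatile) entities
        // can no longer drag the whole cached graph into SaveChanges
        var cachedSettings = context.Settings
                                    .AsNoTracking()
                                    .ToList();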

  • Vista install works on one computer, but bluescreens another (on which Vista is known to work)

    - by Ken
    I hope my explanations make some sense -- please ask for clarification if they don't. I had a computer running Windows Vista (Ultimate, 64-bit). All was well! Then one day there was a nasty power surge at the office, and it died. (We didn't have surge protectors at the office, unfortunately; I assumed our lines were conditioned elsewhere, or that it wasn't an issue here. Oops.) After some testing, it was determined that the PSU, motherboard, and RAM were bad.

    While waiting for new hardware to arrive, I put my hard disk in a spare PC which had identical parts (mobo/CPU/RAM/PSU/video). Everything worked perfectly. The only way I could even tell it wasn't my computer was that Vista asked to re-activate itself with the new hardware, which worked fine too. So the hard disk seems OK.

    Then the new parts arrived. The old motherboard model is no longer manufactured, so it's a new one with the same CPU/RAM/videocard/etc. slots. The PSU is also new, while the RAM I'm using is from the spare PC mentioned above. When I put it together and tried booting with my old hard disk, it starts to boot Windows, and then (fairly early in the process) gives a bluescreen and immediately reboots (so I can't see whatever the bluescreen is trying to tell me). I tried "safe mode", which also bluescreened. I tried booting the Vista DVD and running the repair utility, which found a Vista install, confirmed that it would not boot, and, eventually, declared that it was unable to repair it.

    I installed Vista fresh on a new hard disk, with the new mobo/etc., and it works perfectly. (That's what I'm running now.) I've also booted a Linux CD here, which ran great, and I've run Memtest86+ for a while, which found no errors. So all the hardware apart from the old hard disk seems OK, too. I don't think the problem is with my old Vista hard disk, since I used that with another mobo/CPU just fine. I don't think it's any other part of the new hardware, since I'm able to use it (and test it) with no trouble. It's just the combination of my old Vista install plus the new PC hardware that's not happy.

    I can get my data off my old hard disk and onto my new hard disk, and reinstall my apps, but it would be nice if I could fix things so I could continue to use my old hard disk as before. The latest hypothesis I've heard is that Vista had trouble with the new hardware (i.e., motherboard), but we have no idea what to do about that (except Safe Mode, which didn't work). Suggestions? Hypotheses for what's not right about this combination of Vista install and motherboard? Thanks!

  • Windows 7 extremely slow login, exchange performance, printer enumeration, etc...

    - by Jeff
    Background: I have a fresh copy of Windows 7 Professional x64 on a Dell Latitude E6500. The laptop has 8GB RAM, a 250GB drive, and all Intel peripherals (net/wifi/graphics). All available Windows updates, as well as hardware drivers, are installed. The IT folks where I work joined the computer to our Windows 2003-based Active Directory domain. There are no errors in any logs that we've looked at, and Group Policy templates appear to have applied properly.

    Problem: Every time I turn on or reboot the computer, it takes between 2 and 10 minutes (all times are actual) after successfully typing my username/password to get to my desktop. My login script does not always run. Sometimes I get a black screen, and a couple of minutes later the login script will pop up and take up to 10 minutes to complete. I can get around this by hitting Ctrl-Shift-Esc and running explorer.exe from the Task Manager. The login script continues to hang, but I can minimize it and go on about my business. Either way, it generally throws errors prior to completing. I often get slow or failed connectivity to Exchange via Outlook. When I bring up printer dialogs, they take several minutes to populate, and block the calling app while doing so. Copies to SMB shares are very slow. On my home network, everything works fine. On both the work network and the home network, I can use remote internet resources just fine: web pages pull up, remote VPNs are fine, and I can max out bandwidth on the SpeakEasy Speed Test. I can get almost max bandwidth transferring FTP/HTTP over a LAN. Another symptom of the problem is that when I first log in, the work network shows as "Identifying" for a long time in the Network and Sharing Center, and will often then change to the name of the work domain but say "Unauthenticated Network". Note that this computer previously ran Windows Vista with none of these problems.

    Attempts to fix:
      - Installed the Win7 admin pack
      - Uninstalled/reinstalled all hardware drivers
      - Verified Active Directory DNS settings (Vista works relatively well on the same network)
      - Reset all TCP/IP settings on all adapters using the netsh commands
      - Disabled IPv6 on all adapters
      - Disabled the wifi adapter while on the work network
      - Locked the network card to 100/Full and 1000/Full; also tried Auto
      - Added various important addresses to the hosts file (Exchange, DNS, AD) -- removed them when that didn't help
      - My background is a JPEG (sounds unrelated, but there is apparently a Win7 login bug related to solid-color backgrounds)
      - More I have forgotten

    The IT staff at my company indicated they believe this is due to having Windows 2003 AD servers and not having any Windows 2008 R2 AD servers. Other than that, they have no advice or assistance to offer other than a rebuild (already tried that once, with similar symptoms) or a downgrade to Vista. Any thoughts out there?

  • No network upsets gnome

    - by Darren Cook
    An issue that has been bothering me for over a year now. My notebook, running Ubuntu 10.04, is almost always on a wired connection with a static IP address, and a remote DNS server. The network is configured with entries in /etc/network/interfaces and /etc/resolv.conf, rather than whatever the gnome UI tool was (*). But if I'm out, or simply unplug the network cable, a few things get weird. Specifically, the gnome-panel stops working -- it is still there, but isn't updating. And opening a nautilus window (e.g. to look at files on the local disk) has huge time-outs; by that I mean it will not open the window for something like 30 or 60 seconds, but when it does finally open, I can see the files and it is perfectly usable. Everything else works fine: alt-tab between windows, etc. I use the command line to find the pid of gnome-panel, kill it, wait a couple of seconds, and it opens up a fresh panel, which is normally usable. (Something like 10 minutes later it will have locked/crashed again; the same for the nautilus windows.)

    I'm guessing this is a DNS issue? Would setting up a local DNS server help? Guess number 2 was related to having a file server mount (Samba, though running on another Linux box), and symbolic links on my desktop to files and directories on that file server. My question is a bit vague... Does anyone recognize these symptoms and have a suggestion? Or do you have some troubleshooting suggestions for narrowing down the problem?

    My /etc/hosts:

        127.0.0.1 localhost
        127.0.1.1 myhost

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

        127.0.0.1 testsite.local
        #Other test website URLs here

    UPDATE: Some timings to open some desktop folder icons, after pulling out the network cable. A sub-directory of the desktop took 23 secs to open up; content appears immediately (just 8 files, no further subdirectories). The home directory icon took 12 seconds to open up, but then took about 30 seconds for the files to appear. I closed it and tried again; this time it took 18 seconds to open up, but then 70 seconds before anything appeared.

    *: I couldn't work out how to use the gnome network tool for my needs, which include 3-4 static IPs for testing virtual hosts locally.
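
    Since the symptoms look like name-resolution calls blocking the desktop until they time out, one low-risk experiment is making the resolver fail fast when the remote DNS server is unreachable. A sketch of /etc/resolv.conf (the 127.0.0.1 line assumes a local caching daemon such as dnsmasq has been installed; the second address is a placeholder for the existing remote server):

        # fail fast instead of blocking the UI for ~30 s per lookup
        options timeout:1 attempts:1
        nameserver 127.0.0.1      # local cache (e.g. dnsmasq), if installed
        nameserver 192.0.2.53     # placeholder for the existing remote DNS server

    If the long nautilus delays shrink to roughly timeout x attempts, that's strong evidence the hangs are DNS lookups (possibly for the Samba server's name) rather than something inside GNOME itself.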
