Search Results

Search found 9156 results on 367 pages for 'cloud storage'.

Page 334 of 367

  • Why does Fabric display the disconnect from server message for almost 2 minutes?

    - by Matthew Rankin
    Fabric displays "Disconnecting from username@server... done." for almost two minutes before showing a new command prompt whenever I issue a fab command. The problem occurs with Fabric commands issued both to an internal server and to a Rackspace cloud server. Below I've included the auth.log from the server; I didn't see anything relevant in the logs on my MacBook. Any thoughts as to what the problem is?

    Server's SSH auth.log with LogLevel VERBOSE:

        Apr 21 13:30:52 qsandbox01 sshd[19503]: Accepted password for mrankin from 10.10.100.106 port 52854 ssh2
        Apr 21 13:30:52 qsandbox01 sshd[19503]: pam_unix(sshd:session): session opened for user mrankin by (uid=0)
        Apr 21 13:30:52 qsandbox01 sudo: mrankin : TTY=unknown ; PWD=/home/mrankin ; USER=root ; COMMAND=/bin/bash -l -c apache2ctl graceful
        Apr 21 13:30:53 qsandbox01 sshd[19503]: pam_unix(sshd:session): session closed for user mrankin

    Server configuration: Ubuntu 9.10, OpenSSH (Ubuntu package version 1:5.1p1-6ubuntu2).

    Client configuration: Mac OS X 10.6.3, Fabric 0.9, virtualenv 1.4.7, pip 0.7.

    Thoughts on the cause of the issue: I don't know how long the problem has existed, but I know that at one point I didn't have it. What has changed since then is that I recreated my virtualenvs using virtualenv 1.4.7, virtualenvwrapper 2.1, and pip 0.7. I'm not sure this is related, but it's a thought, since I run my fabfiles from within a virtualenv.
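
    To narrow down where the two minutes go, it can help to time the SSH teardown outside of Fabric. Below is a minimal sketch, assuming paramiko (the SSH library Fabric 0.9 builds on) is available; the host name and credentials are placeholders, not values from the question.

        import time
        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        # Placeholder host and credentials.
        client.connect("qsandbox01.example.com", username="mrankin", password="...")

        stdin, stdout, stderr = client.exec_command("uptime")
        print(stdout.read())

        start = time.time()
        client.close()  # if this also hangs ~2 minutes, the delay is below Fabric
        print("disconnect took %.1f seconds" % (time.time() - start))

    If the close returns quickly here, the delay is more likely in Fabric 0.9's own disconnect handling than in the SSH layer or the server.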

    Read the article

  • What objects would get defined in a Bridge scoring app (JavaScript)

    - by Alex Mcp
    I'm writing a Bridge (card game) scoring application as practice in JavaScript, and am looking for suggestions on how to set up my objects. I'm pretty new to OO in general, and would love a survey of how and why people would structure a program to do this (the "survey" begets the CW mark; additionally, I'll happily close this if it's out of range of typical SO discussion). The platform is going to be a web app for WebKit on the iPhone, so local storage is an option. My basic structure is like this:

        var Team = {
            player1: ,
            player2: ,
            vulnerable: ,     // flag for whether or not you've lost a game yet; affects scoring
            scoreAboveLine: ,
            scoreBelowLine: ,
            gamesWon:
        };

        var Game = {
            init: ,           // function to get old scores and teams from DB
            currentBid: ,
            score: ,          // function to accept whether bid was made and apply to Teams{}
            save:             // auto-run that is called after each game to commit to DB
        };

    So basically I'll instantiate two teams, and then run loops of game.currentBid() and game.score(). Functionally the scoring is working fine, but I'm wondering if this is how someone else would choose to break down the scoring of Bridge, and if there are any oversights I've made with regards to OO-ness and how I've abstracted the game. Thanks!
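
    For comparison, one common way to get two independent team instances (the object-literal sketch above describes a single shared object) is a constructor function. This is only a sketch; the 100-points-below-the-line rule is standard rubber bridge scoring rather than anything taken from the question:

        function Team(player1, player2) {
            this.player1 = player1;
            this.player2 = player2;
            this.vulnerable = false;
            this.scoreAboveLine = 0;
            this.scoreBelowLine = 0;
            this.gamesWon = 0;
        }

        Team.prototype.addBelowLine = function (points) {
            this.scoreBelowLine += points;
            if (this.scoreBelowLine >= 100) {  // a game is 100 points below the line
                this.gamesWon += 1;
                this.vulnerable = true;        // adjust to whichever vulnerability rule you use
                this.scoreBelowLine = 0;
            }
        };

        var we = new Team("N", "S"), they = new Team("E", "W");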

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients.

    Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed.

    I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separately mapped, write-only page, while keeping the structs in read-only mapped pages. This will work, since the OS will force my application to crash if it writes to the read-only pages, but indirect storage can be undesirable when trying to write lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically.

    Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
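
    One layout that keeps the write window small without adding a pointer hop is to put the reference counts in a second shared-memory region, parallel to the structs and correlated by index, so clients map the structs PROT_READ and only the counter array writable. A sketch of the client side under those assumptions (POSIX shm, hypothetical region names, error handling omitted):

        #include <atomic>
        #include <cstddef>
        #include <cstdint>
        #include <fcntl.h>
        #include <sys/mman.h>

        struct Record { char payload[60]; };   // fixed-size, read-only to clients

        int main() {
            const std::size_t kCount = 1024;

            // Region 1: the records, mapped read-only; any stray write faults.
            int dfd = shm_open("/records", O_RDONLY, 0);
            const Record* records = static_cast<const Record*>(
                mmap(nullptr, kCount * sizeof(Record), PROT_READ, MAP_SHARED, dfd, 0));

            // Region 2: one atomic counter per record, the only writable mapping.
            int cfd = shm_open("/refcounts", O_RDWR, 0);
            auto* counts = static_cast<std::atomic<std::uint32_t>*>(
                mmap(nullptr, kCount * sizeof(std::atomic<std::uint32_t>),
                     PROT_READ | PROT_WRITE, MAP_SHARED, cfd, 0));

            // records[i] and counts[i] correspond by index, so the lock-free code
            // never follows a stored pointer; it reuses the index it already has.
            counts[42].fetch_add(1, std::memory_order_acq_rel);
            (void)records;
            return 0;
        }

    This doesn't protect the counter array itself from overflows, but it shrinks the writable surface from the whole struct region to 4 bytes per record.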

    Read the article

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize: when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code:

        BigDecimal myBD = new BigDecimal("" + someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in the DB...

    In more than 99% of cases everything works fine, but in a really small number of cases the bug mentioned above occurs, and that's quite annoying. If I change the previous code to avoid the String constructor of BigDecimal, then I do not encounter the bug in my use cases:

        BigDecimal myBD = new BigDecimal(someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in the DB...

    However, how can I be sure that this solution is the correct way to handle the use of BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue:

    - Do not use the new BigDecimal(String) constructor and use new BigDecimal(double) directly?
    - Force Oracle to use the toPlainString() method instead of toString() when dealing with BigDecimal (and in that case, how)?
    - Any other solution?

    Environment information: Java 1.6.0_14, Hibernate 2.1.8 (yes, it is a quite old version), Oracle JDBC 9.0.2.0 (also tested with 10.2.0.3.0), Oracle database 10.2.0.3.0.
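
    The change the poster is running into is visible without Oracle at all: since Java 5, Double.toString() feeding the BigDecimal(String) constructor can produce scientific notation, which BigDecimal.toString() then preserves, while toPlainString() never does. A minimal, self-contained demonstration (not the poster's code):

        import java.math.BigDecimal;

        public class BigDecimalNotation {
            public static void main(String[] args) {
                double someDoubleValue = 7.65E7;           // 76,500,000
                BigDecimal viaString = new BigDecimal("" + someDoubleValue);

                System.out.println(viaString.toString());      // prints 7.65E+7 on Java 5+
                System.out.println(viaString.toPlainString()); // prints 76500000
            }
        }

    If an old JDBC driver stringifies the value with toString(), the exponent form is what it has to parse; rebuilding the value as new BigDecimal(value.toPlainString()) before handing it to Hibernate is one hedged workaround to test alongside the double constructor.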

    Read the article

  • Bit shift and pointer oddities in C, looking for explanations

    - by foo
    Hi all, I discovered something odd that I can't explain. If someone here can see what is happening, or why, I'd like to know. What I'm doing is taking an unsigned short containing 12 bits aligned high, like this:

        1111 1111 1111 0000

    I then want to shift the bits so that each byte in the short holds 7 bits, with the MSB as a pad. The result for what's presented above should look like this:

        0111 1111 0111 1100

    What I have done is this:

        unsigned short buf = 0xfff;  // align high
        buf <<= 4;
        buf >>= 1;
        *((char*)&buf) >>= 1;

    This gives me something that looks like it's correct, but the result of the last shift leaves the high bit set, like this:

        0111 1111 1111 1100

    Very odd. If I use an unsigned char as temporary storage and shift that, then it works, like this:

        unsigned short buf = 0xfff;
        buf <<= 4;
        buf >>= 1;
        unsigned char tmp = *((char*)&buf);
        *((char*)&buf) = tmp >> 1;

    The result of this is:

        0111 1111 0111 1100

    Any ideas what is going on here?
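
    One likely explanation, assuming char is signed on this platform (the default with common x86 compilers): *((char*)&buf) >>= 1 right-shifts a negative signed char, and the arithmetic shift sign-extends, keeping the top bit set. The sketch below demonstrates the effect and the usual fix; it assumes a little-endian machine, which the poster's results imply.

        #include <stdio.h>

        int main(void) {
            unsigned short buf = 0xfff;
            buf <<= 4;                    /* 1111 1111 1111 0000 */
            buf >>= 1;                    /* 0111 1111 1111 1000; low byte is 0xF8 */

            /* 0xF8 seen through a signed char is -8, and shifting -8 right gives
               -4 (0xFC) with common compilers, because the shift sign-extends. */
            signed char s = (signed char)(buf & 0xff);
            printf("signed:   %02x\n", (unsigned char)(s >> 1));   /* fc */

            /* Through an unsigned char, a 0 is shifted in instead. */
            unsigned char u = (unsigned char)(buf & 0xff);
            printf("unsigned: %02x\n", u >> 1);                    /* 7c */

            /* So the minimal fix is to cast to unsigned char* instead: */
            *((unsigned char *)&buf) >>= 1;
            printf("buf: %04x\n", buf);   /* 7f7c on a little-endian machine */
            return 0;
        }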

    Read the article

  • how to push different local git branches to heroku/master

    - by lsiden
    Heroku has a policy of ignoring all branches but 'master'. While I'm sure Heroku's designers have excellent reasons for this policy (I'm guessing storage and performance optimization), the consequence for me as a developer is that whatever local topic branch I may be working on, I would like an easy way to switch Heroku's master to that local topic branch and do a "git push heroku -f" to overwrite master on Heroku.

    What I got from reading the "Pushing Refspecs" section of http://progit.org/book/ch9-5.html is:

        git push -f heroku local-topic-branch:refs/heads/master

    What I'd really like is a way to set this up in the config file so that "git push heroku" always does the above, replacing local-topic-branch with the name of whatever my current branch happens to be. If anyone knows how to accomplish that, please let me know!

    The caveat, of course, is that this is only sensible if I am the only one who can push to that Heroku app/repository. A test or QA team might manage such a repository to try out different candidate branches, but they would have to coordinate so that they all agree on what branch they are pushing to it on any given day. Needless to say, it would also be a very good idea to have a separate remote repository (like GitHub) without this restriction for backing everything up to. I'd call that one "origin" and use "heroku" for Heroku, so that "git push" always backs up everything to origin, and "git push heroku" pushes whatever branch I'm currently on to Heroku's master branch, overwriting it if necessary.

    Can anybody tell me if this would work?

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +refs/heads/*:refs/heads/master

    I'd like to hear from someone more experienced before I begin to experiment, although I suppose I could create a dummy app on Heroku and experiment with that. As for fetching, I don't really care if the Heroku repository is write-only; I still have a separate repository, like GitHub, for backup and cloning of all my work.

    Footnote: this question is similar to, but not quite the same as, http://stackoverflow.com/questions/1489393/good-git-deployment-using-branches-strategy-with-heroku
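
    For what it's worth on the proposed config: git rejects a push refspec that has a glob on only one side, so +refs/heads/*:refs/heads/master would error out rather than push the current branch. One commonly suggested alternative, which the dummy-app experiment could verify, is to use HEAD as the source, since HEAD resolves to whatever branch is checked out at push time:

        [remote "heroku"]
            url = git@heroku.com:my-app.git
            push = +HEAD:refs/heads/master

    With that in place, "git push heroku" force-pushes the current branch to Heroku's master, while a plain "git push" still goes to origin as before (assuming origin is the default remote).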

    Read the article

  • rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact, there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that would allow me to define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating, and various queries should be easy, and the GUI should be generated automatically. I'd like to add some additional functionality by coding (such as various complex calculations on the data).

    I'm a programmer, so I'm willing to learn to use a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently doing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know that there are solutions out there that are much better, and I'd benefit by knowing how to use them.

    So, in short, the basic requirements:

    - database-based data storage (SQLite if possible)
    - very automated GUI creation
    - desktop based (as in: not a web app)
    - extendable by coding
    - used for creating simple data entry, view, and query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.

    Read the article

  • How can I inject multiple repositories into an NServiceBus message handler?

    - by Paco
    I use the following:

        public interface IRepository<T>
        {
            void Add(T entity);
        }

        public class Repository<T> : IRepository<T>
        {
            private readonly ISession session;

            public Repository(ISession session)
            {
                this.session = session;
            }

            public void Add(T entity)
            {
                session.Save(entity);
            }
        }

        public class SomeHandler : IHandleMessages<SomeMessage>
        {
            private readonly IRepository<EntityA> aRepository;
            private readonly IRepository<EntityB> bRepository;

            public SomeHandler(IRepository<EntityA> aRepository, IRepository<EntityB> bRepository)
            {
                this.aRepository = aRepository;
                this.bRepository = bRepository;
            }

            public void Handle(SomeMessage message)
            {
                aRepository.Add(new EntityA(message.Property));
                bRepository.Add(new EntityB(message.Property));
            }
        }

        public class MessageEndPoint : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
        {
            public void Init()
            {
                ObjectFactory.Configure(config =>
                {
                    config.For<ISession>()
                          .CacheBy(InstanceScope.ThreadLocal)
                          .TheDefault.Is.ConstructedBy(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
                    config.ForRequestedType(typeof(IRepository<>))
                          .TheDefaultIsConcreteType(typeof(Repository<>));
                });
            }
        }

    My problem with the thread-local storage is that the same session is used for the whole lifetime of the application thread; I discovered this when I saw that the first-level cache wasn't cleared. What I want is to use a new session instance before each call to IHandleMessages<T>.Handle. How can I do this with StructureMap? Do I have to create a message module?

    Read the article

  • Getting an Android App to Show Up in the market for "Sony Internet TV" (Google TV)

    - by user1291659
    I'm having a bit of trouble getting my app to show up in the market for Google TV. I've searched Google's official documentation, and I don't believe the manifest lists any elements that would invalidate the program: the only hardware requirements specified are landscape mode, wake lock, and external storage (none of which should cause it to be filtered for GTV according to the documentation), and I set the uses-touchscreen element's "required" attribute to false. Below is the AndroidManifest.xml for my project:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.whateversoft"
            android:versionCode="2"
            android:versionName="0.1" >

            <uses-sdk android:minSdkVersion="8" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="Color Shafted"
                android:theme="@style/Theme.NoBackground"
                android:debuggable="false">

                <activity
                    android:label="Color Shafted"
                    android:name=".colorshafted.ColorShafted"
                    android:configChanges="keyboard|keyboardHidden|orientation"
                    android:screenOrientation="landscape">
                    <!-- Set as the default run activity -->
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>

                <activity
                    android:label="Color Shafted Settings"
                    android:name=".colorshafted.Settings"
                    android:theme="@android:style/Theme"
                    android:configChanges="keyboard|keyboardHidden">
                </activity>
            </application>

            <!-- DEFINE PERMISSIONS FOR CAPABILITIES -->
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
            <uses-permission android:name="android.permission.WAKE_LOCK"/>
            <uses-feature android:name="android.hardware.touchscreen" android:required="false" />
            <!-- END OF PERMISSIONS FOR CAPABILITIES -->
        </manifest>

    I'm about to start promoting the app after the next major release, so it's been kind of a bummer that I can't seem to get this to work. Any help would be appreciated; thanks in advance : )

    Read the article

  • SSAS Cube reprocessing fails - then succeeds if I try again

    - by EdgarVerona
    So I'm basically brand new to the concept of BI, and I've inherited an existing ETL process that has two steps:

    1) Load the data into a database that is used only by the cube processing.
    2) Start the SSAS cube processing against said database.

    It seems pretty well isolated, but occasionally (once a week, sometimes twice) it will fail with the following exception: "Errors in the OLAP storage engine: The attribute key cannot be found". Now the interesting thing is that:

    1) The dimension having the issue is not usually the same one (i.e. there's no single dimension that consistently has this failure).
    2) The source table, when I inspect it, does actually contain the attribute key that it says could not be found.

    And, most interestingly...

    3) If I then immediately reprocess the dimensions and cubes manually through SSMS, they reprocess successfully and without incident.

    In both the aforementioned job and when I reprocess them through SSMS, I am using "ProcessFull", so it should be reprocessing them completely. Has anyone run into such an issue? I'm scratching my head over it, because if it were a genuine data integrity issue, reprocessing the cube again wouldn't fix it. What on earth could be happening? I've been tasked with finding out why this happens, but I can neither reproduce it consistently nor point to a data integrity problem as the root cause. Thanks for any input you can provide!

    Read the article

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,    -- order number, unique across all orders
            ProductID,  -- product ID; can be used as a key to look up product info in another table
            Price,      -- price of the product per unit at the time of the order
            Quantity,   -- quantity of the product for the order
            Total,      -- total cost of the order for the product (Price * Quantity)
            Date,       -- date of the order
            StoreID,    -- the store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get information about products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What design decisions would allow these types of queries to run faster, assuming storage size isn't an issue?

    For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered, given the above scenario?
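
    A sketch of the trigger-maintained summary table the poster describes (MySQL syntax; the summary table and trigger name are made up, the source columns follow the question). Deriving AvgPrice at read time avoids keeping a third column in sync:

        CREATE TABLE StoreProductSummary (
            StoreID    INT NOT NULL,
            ProductID  INT NOT NULL,
            TotalCost  DECIMAL(15,2) NOT NULL DEFAULT 0,
            QTY        INT NOT NULL DEFAULT 0,
            PRIMARY KEY (StoreID, ProductID)
        );

        CREATE TRIGGER trg_sales_summary AFTER INSERT ON SalesHistoryTable
        FOR EACH ROW
            INSERT INTO StoreProductSummary (StoreID, ProductID, TotalCost, QTY)
            VALUES (NEW.StoreID, NEW.ProductID, NEW.Total, NEW.Quantity)
            ON DUPLICATE KEY UPDATE
                TotalCost = TotalCost + NEW.Total,
                QTY       = QTY + NEW.Quantity;

    Reading the profile then becomes a primary-key lookup, with AvgPrice computed as TotalCost/QTY in the SELECT.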

    Read the article

  • Producer/consumer system using a database (MySQL), is this feasible?

    - by johnrl
    Hi all. I need something to coordinate my system, which has several consumers/producers each running on different machines with different operating systems. I have been researching using MySQL to do this, but it seems ridiculously difficult. My requirements are simple: I want to be able to add or remove consumers/producers at any time, so they should not depend on each other at all. Naturally, a database would separate the two nicely.

    I have been looking at the Q4M message queuing plugin for MySQL, but it seems complicated to use. I have to recompile it every time I upgrade MySQL (can this really be true?), because when I try to install it on Ubuntu 9.10 with MySQL 5.1.37 it says:

        Can't open shared library 'libqueue_engine.so' (errno: 0 API version for STORAGE ENGINE plugin is too different)

    There is apparently no precompiled version for MySQL 5.1.37. Also, what if I want to run MySQL on my Windows machine? I can't rely on this plugin then, as it only seems to run on Linux and OS X. I really need some input on how best to construct my system.
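
    If the plugin route stays painful, a plain InnoDB table can serve as the queue; it is slower than a purpose-built engine but works identically on every platform and MySQL build. A sketch with a hypothetical schema (consumers claim a row atomically via a single UPDATE):

        CREATE TABLE job_queue (
            id         BIGINT AUTO_INCREMENT PRIMARY KEY,
            payload    TEXT NOT NULL,
            claimed_by VARCHAR(64) NULL,
            done       TINYINT NOT NULL DEFAULT 0
        ) ENGINE=InnoDB;

        -- Producer: enqueue work.
        INSERT INTO job_queue (payload) VALUES ('some task');

        -- Consumer: claim exactly one unclaimed job, then read it back.
        UPDATE job_queue SET claimed_by = 'consumer-42'
        WHERE claimed_by IS NULL AND done = 0
        ORDER BY id LIMIT 1;

        SELECT id, payload FROM job_queue
        WHERE claimed_by = 'consumer-42' AND done = 0;

        -- Consumer: mark the job finished (id taken from the SELECT above).
        UPDATE job_queue SET done = 1 WHERE id = 1;

    Consumers poll with the UPDATE/SELECT pair; because the UPDATE is atomic, two consumers can never claim the same row, and producers and consumers stay completely decoupled.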

    Read the article

  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger.

        table name = tweets_cache
        rows       = 572,327

    This is the query I'm currently using; it is slow, taking over 5 seconds:

        SELECT * FROM tweets_cache t
        WHERE t.province='' AND t.mp='0'
        ORDER BY t.published DESC
        LIMIT 50;

    If I take out either the WHERE or the ORDER BY, the query is super fast (0.016 seconds). I have the following indexes on the tweets_cache table:

    - PRIMARY
    - published
    - mp
    - category
    - province
    - author

    So I'm not sure why it's not using the indexes, since mp, province, and published all have indexes. Profiling the query shows that it's not using an index to sort and is using filesort, which is really slow:

        possible_keys = mp,province
        Extra = Using where; Using filesort

    I tried adding a new multi-column index on "province & mp". The explain shows this new index listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query: http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png

    Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, in milliseconds. So I copied all the same MySQL startup variables from the server to my local machine to make sure there wasn't some setting that might be causing this. But even after that, the local query runs super fast, while the one on the live server still takes over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using:

        default-storage-engine = MYISAM
        max_connections = 800
        skip-locking
        key_buffer = 512M
        max_allowed_packet = 1M
        table_cache = 512
        sort_buffer_size = 4M
        read_buffer_size = 4M
        read_rnd_buffer_size = 16M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 128M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        # Disable Federated by default
        skip-federated
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
        key_buffer = 512M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M
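
    As a hedged suggestion: a two-column (province, mp) index can satisfy the WHERE but still leaves MySQL sorting the matched rows, which is exactly the filesort in the profile. An index that appends the ORDER BY column lets the rows come out pre-sorted:

        ALTER TABLE tweets_cache
            ADD INDEX idx_province_mp_published (province, mp, published);

    With the two equality columns first and published last, EXPLAIN should stop showing "Using filesort"; the index name here is made up, and it's worth re-running EXPLAIN to confirm the optimizer actually picks the new index.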

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone, I work for a small dotcom that will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?" scenarios, reported as the program has been passed around to various non-technical types, that we've been unable to replicate.

    One of the biggest problems we're facing is testing: there are a total of three programmers (only one working on this particular project: me), no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems effectively, and the software itself sports a wide array of diagnostic features, but since they aren't computer nerds like us, their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming.

    I am looking for resources that would let us amplify our testing abilities without putting together an actual lab and hiring beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service, and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing?

    EDIT: According to the lingo, our goal here is to expand our systems-testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions to beef up our unit/integration testing will help very much, as we are consistently hitting walls at the systems-testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2?

    Tom

    Read the article

  • Problems pulling data from NSMutableArray for MapKit?

    - by Graeme
    Hi, I have a class (DataImporter) containing the code to download an RSS feed. I also have a view and separate class (TableView) that displays the data in a UITableView and starts the parsing process, storing the parsed information in an NSMutableArray (items). Now I wish to add a UIMapView that displays the items in the items NSMutableArray. I'm struggling to transfer the data into the new class (mapView) to show it on the map, and I'd prefer not to create a new class that downloads the data again just for the map. Is there a way I can transfer the information from the NSMutableArray (items) to the mapView? Thanks.

    Code for viewDidLoad in the MapKit controller:

        Data *data = nil;
        NSString *ilocation = [data locations];
        NSString *ilocation2 = @"New Zealand";
        NSString *inewlString;
        inewlString = [ilocation stringByAppendingString:ilocation2];
        NSLog(@"inewlString=%@", inewlString);

        if (forwardGeocoder == nil) {
            forwardGeocoder = [[BSForwardGeocoder alloc] initWithDelegate:self];
        }
        // Forward geocode!
        [forwardGeocoder findLocation:inewlString];

    Code for parsing data into the NSMutableArray:

        - (void)beginParsing {
            NSLog(@"Parsing has begun");
            //self.navigationItem.rightBarButtonItem.enabled = NO;
            // Allocate the array for song storage, or empty the results of previous parses
            if (incidents == nil) {
                NSLog(@"Grabbing array");
                self.datas = [NSMutableArray array];
            } else {
                [datas removeAllObjects];
                [self.tableView reloadData];
            }
            // Create the parser, set its delegate, and start it.
            self.parser = [[DataImporter alloc] init];
            parser.delegate = self;
            [parser start];
        }

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions
        {
            # data source
            type     = mysql
            sql_host = xxxxxx
            sql_user = xxxxxx  # replace with your db username
            sql_pass = xxxxxx  # replace with your db password
            sql_db   = xxxxxx  # replace with your db name
            # these two are optional
            sql_port = xxxxxx
            #sql_sock = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre = SET NAMES utf8

            # main document fetch query
            sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                FROM question AS q \
                WHERE q.deleted=0

            # optional - used by command-line search utility to display document information
            sql_query_info = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint = level
        }

        index questions
        {
            # which document source to index
            source = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo = extern

            # morphology
            morphology = stem_en

            # stopwords file
            #stopwords = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len = 1
            enable_star = 1

            # charset encoding type
            charset_type = utf-8
        }

        # indexer settings
        indexer
        {
            # memory limit (default is 32M)
            mem_limit = 64M
        }

        # searchd settings
        searchd
        {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address = 127.0.0.1

            # port on which search daemon will listen
            port = 3312

            # searchd run info is logged here - create or change the folder
            log = ../log/sphinx.log

            # all the search queries are logged here
            query_log = ../log/query.log

            # client read timeout, seconds
            read_timeout = 5

            # maximum amount of children to fork
            max_children = 30

            # a file which will contain searchd process ID
            pid_file = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches = 1000
        }

    And here's the search part from views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • How can I use PHP's mail() function within PHP-FPM, on Nginx?

    - by TheBlackBenzKid
    I have searched everywhere for this and I really want to resolve it. In the past I just ended up using an SMTP service like SendGrid for PHP together with a mailing plugin like SwiftMailer. However, this time I want to use plain PHP. My setup (I am new to server setup; this is my personal setup following a tutorial):

    - Nginx
    - Rackspace Cloud
    - PHP 5.3
    - PHP-FPM
    - Ubuntu 11.04

    My phpinfo() returns this for the mail entries:

        mail.log                     no value
        mail.add_x_header            On
        mail.force_extra_parameters  no value
        sendmail_from                no value
        sendmail_path                /usr/sbin/sendmail -t -i
        SMTP                         localhost
        smtp_port                    25

    Can someone help me understand why mail() will not work? My script works on all other sites; it is a normal mail command. Do I need to set up logs or enable some PHP port on the server?

    My sample script:

        <?
        // FORM VARS
        // $to = $customers_email;
        $to = $_GET["customerEmailFromForm"];
        $subject = "Thank you for contacting Real-Domain.com";
        $message = "
        <html>
          <head>
          </head>
          <body>
            Thanks, your message was sent and our team will be in touch shortly.
            <img src='http://cdn.com/emails/thank_you.jpg' />
          </body>
        </html>
        ";
        $headers = "MIME-Version: 1.0" . "\r\n";
        $headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
        $headers .= 'From: <[email protected]>' . "\r\n";

        // SEND MAIL
        mail($to, $subject, $message, $headers);
        ?>

    Thanks

    Read the article

  • How to stop ejabberd from using mnesia

    - by Eldad Mor
    I'm trying to establish a procedure for restoring my database from a crashed server to a new server. My server runs Ejabberd as an XMPP server, and I configured it to use PostgreSQL instead of Mnesia, or so I thought.

    My procedure goes something like: dump the contents of the original DB, run the new server, restore the contents of the DBs using psql, then run the system. However, when I try running Ejabberd again I get a crash:

        =CRASH REPORT==== 3-Dec-2010::22:05:00 ===
          crasher:
            pid: <0.36.0>
            registered_name: []
            exception exit: {bad_return,{{ejabberd_app,start,[normal,[]]},
                            {'EXIT',"Error reading Mnesia database"}}}
              in function application_master:init/4

    Here I was thinking that my system was running on PostgreSQL, while it seems it was still using Mnesia. I have several questions:

    1) How can I make sure Mnesia is not being used?
    2) How can I divert all the Ejabberd activities to PostgreSQL?

    This is the modules part of my ejabberd.cfg file:

        {modules,
         [
          {mod_adhoc,     []},
          {mod_announce,  [{access, announce}]}, % requires mod_adhoc
          {mod_caps,      []},
          {mod_configure, []}, % requires mod_adhoc
          {mod_ctlextra,  []},
          {mod_disco,     []},
          {mod_irc,       []},
          {mod_last_odbc, []},
          {mod_muc,       [
                           {access, muc},
                           {access_create, muc},
                           {access_persistent, muc},
                           {access_admin, muc_admin},
                           {max_users, 500}
                          ]},
          {mod_offline_odbc, []},
          {mod_privacy_odbc, []},
          {mod_private_odbc, []},
          {mod_pubsub,    [ % requires mod_caps
                           {access_createnode, pubsub_createnode},
                           {plugins, ["default", "pep"]}
                          ]},
          {mod_register,  [
                           {welcome_message, none},
                           {access, register}
                          ]},
          {mod_roster_odbc, []},
          {mod_stats,     []},
          {mod_time,      []},
          {mod_vcard_odbc, []},
          {mod_version,   []}
         ]}.

    What am I missing? I am assuming the crash is due to the Mnesia DB being used by Ejabberd; since it's out of sync with the PostgreSQL DB, it cannot operate correctly. But maybe I'm totally off track here, and I would love some direction.

    EDIT: One problem solved. Since I'm using the Amazon cloud, I needed to hardcode ERLANG_NODE so it wouldn't be derived from the hostname (which changes on reboot). This got my Ejabberd running, but I still wish to stop using Mnesia, and I wonder what part of Ejabberd is still using it and how I can find out.

    Read the article

  • Security implications of writing files using PHP

    - by susmits
    I'm currently trying to create a CMS using PHP, purely in the interest of education. I want the administrators to be able to create content, which will be parsed and saved on the server storage in pure HTML form to avoid the overhead that executing PHP scripts would incur. Unfortunately, I could only think of a few ways of doing so:

    1) Setting write permission on every directory where the CMS should want to write a file. This sounds like quite a bad idea.
    2) Setting write permission on a single cache directory. A PHP script could then include or fopen/fread/echo the content from a file in the cache directory at request time. This could perhaps be carried out in a MediaWiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js.

    I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?
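
    On sanitising $_GET['page']: a strict whitelist of allowed characters, plus a check that the resolved path stays inside the cache directory, closes both the remote-URL and directory-traversal variants. A sketch (the cached/ directory follows the question; the slug pattern is an assumption):

        <?php
        $page = isset($_GET['page']) ? $_GET['page'] : '';

        // Allow only simple slugs: letters, digits, dash, underscore.
        if (!preg_match('/^[A-Za-z0-9_-]+$/', $page)) {
            header('HTTP/1.0 404 Not Found');
            exit;
        }

        $file = realpath('cached/' . $page . '.html');

        // Belt and braces: the resolved path must stay inside cached/.
        if ($file === false || strpos($file, realpath('cached') . DIRECTORY_SEPARATOR) !== 0) {
            header('HTTP/1.0 404 Not Found');
            exit;
        }

        readfile($file);

    Because the pattern rejects slashes, colons, and dots outright, values like "http://..." or "../../etc/passwd" never reach the filesystem call at all.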

    Read the article

  • Making two Windows using CreateWindowEx() - C

    - by Jamie Keeling
    Hello, I have a Windows form that has a simple menu and performs a simple operation. I want to be able to create another window, with all the functionality of a menu bar, message pump, etc., as a separate thread, so I can share the results of the operation with the second window. I.e.:

    1) Form A opens; Form B opens as a separate thread
    2) Form A performs the operation
    3) Form A passes the results via memory to Form B
    4) Form B displays the results

    I'm confused as to how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I think that using CreateWindow will allow me to make another window, but again I'm not sure how to access the message pump so I can respond to certain events, like WM_CREATE, on the second window. I hope this makes sense. Thanks!

    Edit: I've attempted to make a second window, and although this does compile, no windows show at all on build.

        //////////////////////
        // WINDOWS FUNCTION //
        //////////////////////
        LRESULT CALLBACK WindowFunc(HWND hMainWindow, UINT message, WPARAM wParam, LPARAM lParam)
        {
            // Fields
            WCHAR buffer[256];
            struct DiceData storage;
            HWND hwnd;

            // Act on current message
            switch (message)
            {
            case WM_CREATE:
                AddMenus(hMainWindow);
                hwnd = CreateWindowEx(
                    0, "ChildWClass", (LPCTSTR) NULL,
                    WS_CHILD | WS_BORDER | WS_VISIBLE,
                    0, 0, 0, 0,
                    hMainWindow, NULL, NULL, NULL);
                ShowWindow(hwnd, SW_SHOW);
                break;

    Any suggestions as to why this happens?
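
    Two things stand out in the snippet, offered as a guess rather than a diagnosis: the child is created with zero width and height, and CreateWindowEx can only succeed if "ChildWClass" has been registered first. A sketch of both fixes (the child window procedure name is hypothetical; registration would go in WinMain or similar):

        WNDCLASSEX wcex = {0};
        wcex.cbSize        = sizeof(WNDCLASSEX);
        wcex.lpfnWndProc   = ChildWindowFunc;   /* hypothetical child window procedure */
        wcex.hInstance     = hInstance;
        wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wcex.lpszClassName = "ChildWClass";
        RegisterClassEx(&wcex);

        /* Later, in WM_CREATE of the main window: give the child a real size. */
        hwnd = CreateWindowEx(0, "ChildWClass", NULL,
                              WS_CHILD | WS_BORDER | WS_VISIBLE,
                              10, 10, 200, 100,   /* x, y, width, height */
                              hMainWindow, NULL, NULL, NULL);

    If CreateWindowEx still returns NULL, GetLastError() will say why; a zero-sized child, by contrast, is "created" successfully but is simply invisible.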

    Read the article

  • XML comments on delegate-declared events

    - by Matt Whitfield
    I am revisiting some old code, and there are quite a few events declared with delegates manually rather than using EventHandler<T>, like this:

        /// <summary>
        /// Delegate for event Added
        /// </summary>
        /// <param name="index">Index of the item</param>
        /// <param name="item">The item itself</param>
        public delegate void ItemAdded(int index, T item);

        /// <summary>
        /// Added is raised whenever an item is added to the collection
        /// </summary>
        public event ItemAdded Added;

    All well and good, until I come to use Sandcastle to document the library, because it then can't find any XML comments for the private Added field that is generated by the event declaration. I want to try to sort that out, and what I would like to do is either:

    1) get Sandcastle to ignore the auto-generated private field without telling it to ignore all private fields entirely, or
    2) get XML comments generated for the private field.

    Is there any way of achieving this without refactoring the code to look like this:

        /// <summary>
        /// Delegate for event <see cref="Added"/>
        /// </summary>
        /// <param name="index">Index of the item</param>
        /// <param name="item">The item itself</param>
        public delegate void ItemAdded(int index, T item);

        /// <summary>
        /// Private storage for the event firing delegate for the <see cref="Added"/> event
        /// </summary>
        private ItemAdded _added;

        /// <summary>
        /// Added is raised whenever an item is added to the collection
        /// </summary>
        public event ItemAdded Added
        {
            add { _added += value; }
            remove { _added -= value; }
        }

    Read the article

  • Managing/Storing complex application form

    - by mickyjtwin
    I am developing an online application form, which can be one of two categories: domestic or international. Each type has common questions and also type-specific questions. There are approximately six pages/steps of questions. The application form can be saved at the end of each step and completed at a later date. Some questions vary depending on answers to previous questions, so there are some complex business rules.

    In terms of storing the results, would it be best to store the answers in an XML field in SQL? What would be the best way to relate a question to an answer? There is no need for the form to render questions dynamically. E.g.:

        <Application id="123" type="Domestic">
          <Answers>
            <EmailAddress>[email protected]</EmailAddress>
            <HomePhone>555-5555</HomePhone>
            <Suburb>MySuburb</Suburb>
            <Guardians>
              <Guardian type="Mother">
                <FirstName>Mom</FirstName>
              </Guardian>
            </Guardians>
          </Answers>
        </Application>

    Read the article

  • Capturing time intervals when somebody was online? How would you implement this feature?

    - by Kirzilla
    Hello, our aim is to build timelines showing the periods of time when a user was online. (It really doesn't matter which user we are talking about or where he was online.) To get information about who is online, we can call an API method:

        someservice.com/api/?call=whoIsOnline

    whoIsOnline gives us a list of users currently online. But there is no API method to get information about who is NOT online, so we have to build our timelines using only the information from whoIsOnline. Of course there will be a measurement error (we can't track information in real time). Let's suppose we call whoIsOnline every 2 minutes (yes, we will run our script from cron every 2 minutes). For example, calling whoIsOnline at 08:00 returns:

        Peter_id
        Michael_id
        Andy_id

    Calling whoIsOnline at 08:02 returns:

        Michael_id
        Andy_id
        George_id

    As you can see, Peter has gone offline, but we have a new onliner, George. The available instruments are a DB (MySQL), text files, and key-value storage (Redis/memcached); feel free to choose any of them (or even all of them). So, we have to end up with information like this:

        George_id was online...
        12 May: 08:02-08:30, 12:40-12:46, 20:14-22:36
        11 May: 09:10-12:30, 21:45-23:00
        10 May: was not online

    And now the question... How would you store the information to implement such timelines? How would you query/calculate the periods of time when a user was online?

    Additional information:

    - You cannot update information about offline users, only about users who are "currently" online.
    - The solution should be flexible: timeline information could be presented relative to any timezone.
    - We should keep information only for the last 7 days.
    - Every user seen online automatically gets his own identifier in our database.

    Uff... it was really hard for me to write because my English is pretty bad, but I hope my question is clear. Thank you.
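
    Given the 2-minute polling interval and the 7-day retention, one structure that fits is one bitmap per user per day, with one bit per 2-minute slot (720 bits, about 90 bytes, per user per day). This is a sketch assuming the redis-py client; the key names are made up:

        import datetime
        import redis

        r = redis.Redis()
        SLOT_SECONDS = 120           # cron runs every 2 minutes
        RETENTION = 7 * 86400        # keep 7 days of history

        def record_online(user_id, now=None):
            """Called for every user returned by whoIsOnline."""
            now = now or datetime.datetime.utcnow()
            key = "online:%s:%s" % (user_id, now.strftime("%Y-%m-%d"))
            slot = (now.hour * 3600 + now.minute * 60 + now.second) // SLOT_SECONDS
            r.setbit(key, slot, 1)
            r.expire(key, RETENTION)  # old days age out automatically

        def intervals(user_id, day):
            """Recover 'online from .. to ..' spans for one day."""
            key = "online:%s:%s" % (user_id, day)
            raw = r.get(key) or b""
            # Redis SETBIT treats offset 0 as the most significant bit of byte 0.
            bits = [(byte >> (7 - i)) & 1 for byte in raw for i in range(8)]
            spans, start = [], None
            for slot, bit in enumerate(bits):
                if bit and start is None:
                    start = slot
                elif not bit and start is not None:
                    spans.append((start * SLOT_SECONDS, slot * SLOT_SECONDS))
                    start = None
            if start is not None:
                spans.append((start * SLOT_SECONDS, len(bits) * SLOT_SECONDS))
            return spans  # offsets in seconds from UTC midnight

    The spans are stored timezone-neutrally (seconds from UTC midnight), so rendering in any timezone is just a matter of shifting the offsets at display time.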

    Read the article

  • Old desktop programmer wants to create S+S project

    - by Craig
    I have an idea for a product that I want to be web-based. But because I live in Brazil, the internet is not always available, so there needs to be a desktop component for when the internet is down. By way of background, I have been a SQL programmer and a desktop application programmer using dBase, VB, and Pascal, and I have created simple websites using HTML and website-creation tools such as FrontPage.

    From my research, I think I have the following options: PHP, Ruby on Rails, Python, or .NET for the programming side; MySQL for the DB; and Apache, or possibly IIS, for the web server. I will probably start with a local ISP for the cloud service, but then maybe move to something more robust and universal in the future, e.g. Amazon or Azure, or something along that line.

    My question then is this: what would you recommend for something like this? I'm sure I have not listed all of the possibilities, only the ones I have researched and thought of. Thanks everyone, Craig

    Read the article

  • NSFetchedResultsController: changing predicate not working?

    - by icerelic
    Hi, I'm writing an app with two tables on one screen. The left table is a list of folders and the right table shows a list of files. When a row is tapped on the left, the right table displays the files belonging to that folder. I'm using Core Data for storage.

    When the folder selection changes, the fetch predicate of the right table's NSFetchedResultsController is changed and a new fetch performed, then the table data is reloaded. I used the following code snippet:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"list = %@", self.list];
        [fetchedResultsController.fetchRequest setPredicate:predicate];

        NSError *error = nil;
        if (![[self fetchedResultsController] performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        [table reloadData];

    However, the fetch results remain the same. I've NSLogged "predicate" before and after the fetch, and both times it was correct, with updated information, yet the fetch results stay the same as the initial fetch (when the view is loaded). I'm not very familiar with the way Core Data fetches objects (is there a caching system?), but I've done similar things before (changing predicates, re-fetching data, and refreshing the table) with single table views and everything went well. If someone could give me a hint, I would really appreciate it. Thanks in advance.
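
    There is indeed a caching system: NSFetchedResultsController persists section and ordering information under the cacheName it was created with, and a cached fetch will ignore predicate changes. The documented requirement is to delete the cache before mutating the fetch request; a sketch (the cache name here is hypothetical, use whatever was passed at init time):

        // Either create the controller with cacheName:nil, or invalidate
        // the cache before changing the predicate:
        [NSFetchedResultsController deleteCacheWithName:@"Files"];

        [fetchedResultsController.fetchRequest setPredicate:predicate];

        NSError *error = nil;
        if (![fetchedResultsController performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
        }
        [table reloadData];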

    Read the article
