Search Results

Search found 172 results on 7 pages for 'ban'.


  • HOUG Conference 2010, doors open today!

    - by Fekete Zoltán
    STARTS TODAY! You can still register on site, at the Ramada Hotel & Resort Lake Balaton. Let's meet in Balatonalmádi on 22-24 March 2010! The professional programme of the HOUG Conference 2010 starts today. At this annual event of the Hungarian Oracle user community, many users report on their Oracle systems, their experiences, and the business value those systems deliver. The conference programme: - On Tuesday, in the public administration track, I am giving the following talk: Possible uses of an ideal high-performance, fault-tolerant environment for government projects - Oracle Exadata, Database Machine. - On Wednesday I will chair the Business Intelligence and Data Warehouse track, and I will also give a talk titled: The ideal OLTP and DW environment for Oracle databases - Oracle Exadata, Database Machine. On Wednesday we will hear a number of interesting talks: - Management Excellence with the Oracle Hyperion EPM applications, Ribarics Pál - SZEZÁM: business intelligence solutions in the life of Magyar Nemzeti Vagyonkezelő Zrt., Holl Zoltán - JD Edwards EnterpriseOne and Oracle BI EE, the Fornetti recipe: jam in the pastry, Bitter Tibor (E-best Kft.), Király János (Fornetti Kft.) - Data warehouses stepping on the gas (towards new roads), Kránicz László (OTP Bank Nyrt.) - Rolling out the Oracle-Hyperion Interactive Reporting end-user ad hoc query tool at the KSH and the lessons learned, Pap Imre (Központi Statisztikai Hivatal) - The ideal OLTP and DW environment for Oracle databases, Fekete Zoltán (Oracle Hungary Kft.) - A BI Suite rollout at MKB-Euroleasing, Mitró Péter (MKB Euroleasing Autóhitel Zrt.) - An Essbase-based planning system at the Bay Zoltán Foundation for Applied Research, Hoffman Zoltán (Bay Zoltán Alkalmazott Kutatási Közalapítvány), Szabó Gábor (R&R Software Zrt.) - A data warehouse implementation on Oracle foundations at National Instruments, Vágó Csaba, Németh Márk (National Instruments Hungary Kft.) - Introducing a banking data mart on data warehouse foundations, Dési Balázs (HP Magyarország Kft.)

    Read the article

  • MDX queries for Oracle OLAP

    - by Fekete Zoltán
    At Oracle OpenWorld, on 12 October 2009, Oracle announced that Simba Technologies' MDX tool for accessing Oracle OLAP was ready: Oracle and Simba Technologies Introduce MDX Provider for Oracle® OLAP. With the MDX Provider for Oracle® OLAP, data managed by the Oracle OLAP multidimensional engine can be reached directly from the Excel user interface. The MDX Provider for Oracle OLAP tool lets you use the Excel PivotTable and PivotChart features directly on the data treasures stored in Oracle OLAP. :) - you can easily exploit the high speed of Oracle Database OLAP on both the query and the calculation side - supported spreadsheet and database platforms: Microsoft Excel 2007 / 2003 and Oracle Database 11g Release 1 and Release 2. Oracle OLAP is available in Oracle Database EE, as an option of it. Oracle developed Oracle OLAP from the famous Oracle Express Server, which also boasted some handsome records back in the day; it now runs as part of the database server. Technical OLAP information. What Oracle OLAP is good for: - it offers analysis that is close to the way business users think - sophisticated analytic queries - enormous query speed, with tiny run times on any volume of data - serious calculation speed even on huge data sets - fast aggregations - the OLAP data can also be managed and queried from SQL! - using cube-organized materialized views, transparent aggregation levels can easily be placed behind the detailed relational data. The MDX Provider for Oracle OLAP tool can be downloaded and tried out here: http://www.simba.com/MDX-Provider-for-Oracle-OLAP.htm.

    Read the article

  • Notes in the margin of a conference

    - by peter.nagy
    I don't want to come across as a provocateur, but I have one or two observations about the otherwise well-run Open Source 2011 conference. Of course there can be complaints about our own events too, and we gladly take them so we can learn from them. So: the electronic registration didn't work, even though they even called me beforehand and everything. Despite that, I was still not on the on-site list. Of course they sorted it out quickly, but still, this is an IT conference we are talking about. And if it's open source, would it really have been so hard to bring Linux machines with OpenOffice (or LibreOffice, or whatever) installed? There was a comment about it at every speaker changeover. Not to mention that, with a few exceptions, most people brought ppt. Of course some of those could have been prepared in OpenOffice too. Though I would take a bet on that. There was one speaker who did not bring ppt and even pointed out that he, unlike the others, was not presenting with Microsoft tools. Instead, he managed to tell us how great it is that open source costs nothing using another paid-for tool. Which is, by the way, a very good presentation application. (No prize: what could it have been? I await your answers here on the blog.) The content, as I said, was interesting. Of course there would be plenty to debate, but perhaps later, on the specific topics. This year we did not take part as speakers, but I think that may change next year. The evening programme was also very good, especially the stunning tricks of Soma the magician.

    Read the article

  • Get to Know a Candidate (5 of 25): Jim Carlson – Grassroots Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Carlson is an American businessman and the Grassroots Party nominee. Carlson is the owner of Last Place on Earth, a head shop located in Duluth, Minnesota. In September 2011, the shop was raided by police for selling bath salts and synthetic marijuana. After the raid, Carlson filed a lawsuit to strike down Minnesota's ban on the substances. His suit was dismissed by the court in November 2011. The Grassroots Party was created in the 1980s to oppose drug prohibition. The party shares many of the political leftist values of the Green Party but with a greater emphasis on marijuana/hemp legalization issues. The permanent platform of the Grassroots Party is the Bill of Rights. Individual candidates' positions on issues vary from Libertarian to Green. All Grassroots candidates would end marijuana/hemp prohibition and re-legalize Cannabis for all its uses. Learn more about Jim Carlson and the Grassroots Party on Wikipedia.

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the amount of tools that hosting companies provide to track and quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every web site known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of ease of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and either ban the IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web & db server) and proactively deal with these source(s) of mayhem?
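
    A dedicated tool such as fail2ban or a web application firewall is the usual answer here, but the core idea is simple enough to sketch by hand. Below is a minimal sketch in Python, assuming a combined-format access log and an iptables-based ban; the log path, suspicious-pattern list, and threshold are illustrative assumptions, not a recommendation for any particular host.

        import re
        import subprocess
        from collections import Counter

        LOG_PATH = "/var/log/apache2/access.log"    # assumption: combined-format access log
        SUSPICIOUS = re.compile(r"union\s+select|information_schema|\.\./|%27", re.I)
        THRESHOLD = 20                              # suspicious hits before an IP is banned

        def offending_ips(log_path):
            hits = Counter()
            with open(log_path) as log:
                for line in log:
                    if SUSPICIOUS.search(line):
                        ip = line.split(" ", 1)[0]  # first field of a combined-format line
                        hits[ip] += 1
            return [ip for ip, count in hits.items() if count >= THRESHOLD]

        def ban(ip):
            # drop all further packets from this source; needs root
            subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)

        if __name__ == "__main__":
            for ip in offending_ips(LOG_PATH):
                ban(ip)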

    Read the article

  • Malaysian Airlines bans kids under 12, creates separate cabins on the lower deck of its A380s

    - by Gopinath
    Kids are a lot of fun to watch and play with as long as they don’t start crying. Once they start crying it’s a tough job for parents to calm them down, and for the people around it’s painful to be part of it. If it happens on a flight, it’s one of the biggest annoyances you can ever experience. Especially on long overnight flights, it’s a nightmare for passengers if a couple of kids are uncontrollable. After receiving many complaints from passengers disturbed by kids in flight, Malaysian Airlines decided to ban kids under 12 from the regular Economy class cabins of its new Airbus A380s. Parents with kids under 12 are only allowed into a special kids’ zone created on the lower deck of the multi-storey Airbus A380 jumbos. Parents with kids under 12 may not appreciate this move, but the rest of the travellers will be happy. Back in June 2011 Malaysian Airlines banned infants from first class on its Boeing 747-400 jets. The CEO of Malaysian Airlines defended the decision on Twitter, arguing that first-class passengers pay a premium for a comfortable journey.  So if you are a parent of kids under 12, think twice before you book tickets on Malaysian Airlines. Creative commons image courtesy: flickr/transworld

    Read the article

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

        IP               Pages   Hits   Bandwidth
        85.xx.xx.xxx     236     236    735.00 KB
        195.xx.xxx.xx    164     164    533.74 KB
        95.xxx.xxx.xxx   90      90     293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way that you could visit my site and use less than 1 MB of bandwidth. You might say that there's the possibility that they could be browsing the site using some browser or plug-in that does not download images, js/css files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/js/etc. files? The only thing that I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs, but I'm curious: is it possible that this person is a legitimate user, or at the very least, not intending to be harmful?
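
    For what it's worth, the per-IP numbers above can be reproduced, and acted on, straight from a combined-format access log rather than the hosting panel. A rough sketch in Python, with the log path and thresholds as illustrative assumptions; it flags IPs that request many pages but transfer almost nothing, which is exactly the signature described in the question.

        import re
        from collections import defaultdict

        LOG_PATH = "/var/log/nginx/access.log"      # assumption: combined log format
        LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) (\d+|-)')

        stats = defaultdict(lambda: {"hits": 0, "bytes": 0})
        with open(LOG_PATH) as log:
            for raw in log:
                m = LINE.match(raw)
                if not m:
                    continue
                ip, method, path, status, size = m.groups()
                stats[ip]["hits"] += 1
                stats[ip]["bytes"] += 0 if size == "-" else int(size)

        # many hits with a tiny average transfer usually means HTML-only crawling
        for ip, s in sorted(stats.items(), key=lambda kv: kv[1]["hits"], reverse=True):
            avg = s["bytes"] / s["hits"]
            if s["hits"] > 50 and avg < 5 * 1024:
                print(f"{ip}: {s['hits']} hits, {s['bytes'] / 1024:.1f} KB total ({avg:.0f} B/request)")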

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, for some requests I think this is needed, because otherwise tokens can be stolen, and before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated on: the URL (possibly minus the query string), the verb, the request IP (potentially a barrier on some mobile devices, though), and the UTC date and time when the client issues the request. For the last one I would have the client send that string in a request header, of course - and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party from using a captured request as a template for different requests later on. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability that is valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for HMAC appropriately. Does this sound enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
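
    The canonical-string-plus-HMAC scheme described above is cheap to prototype outside the ASP.NET plumbing. Below is a minimal sketch in Python (purely illustrative, not the Web API implementation); the timestamp format, header layout, and five-minute freshness window are my own assumptions.

        import hmac
        import hashlib
        from datetime import datetime, timezone, timedelta

        FRESHNESS = timedelta(minutes=5)            # assumption: how old a request may be

        def canonical_string(verb, path, client_ip, timestamp):
            # exclude query string and body, per the scheme described above
            return "\n".join([verb.upper(), path, client_ip, timestamp])

        def sign(secret, verb, path, client_ip, timestamp):
            msg = canonical_string(verb, path, client_ip, timestamp).encode("utf-8")
            return hmac.new(secret, msg, hashlib.sha256).hexdigest()

        def verify(secret, verb, path, client_ip, timestamp, presented_mac, now=None):
            now = now or datetime.now(timezone.utc)
            issued = datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
            if abs(now - issued) > FRESHNESS:
                return False                        # stale or clock-skewed request
            expected = sign(secret, verb, path, client_ip, timestamp)
            return hmac.compare_digest(expected, presented_mac)

        # client side: compute the MAC and send it plus the timestamp as headers
        secret = b"per-caller-private-key"          # issued out of band
        ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        mac = sign(secret, "GET", "/api/users/42", "203.0.113.7", ts)
        print(verify(secret, "GET", "/api/users/42", "203.0.113.7", ts, mac))   # True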

    Read the article

  • Purchase Vouchers From A Reputable Source

    - by Harold Green
    We have seen a recent increase in counterfeit vouchers being marketed online and we want to make sure our Candidates are aware of the risks of purchasing vouchers from unauthorized sellers. Please be advised that only Oracle University and Oracle University authorized resellers may sell vouchers for Oracle Certification exams. If you purchase a voucher from any other source, your voucher may not be valid and you run the risk of program sanctions from Oracle, which could include a lifetime ban on taking Oracle Certification exams. Be sure your voucher is from an authorized source: Oracle University or an Oracle Authorized Reseller. If you are unsure whether your voucher seller is an Authorized Reseller: call Oracle University to confirm, and check for the official Oracle Reseller logo on the website. eBay, Craigslist, etc. are not authorized resale avenues. The only exceptions to the above sources are vouchers from programs that provide a discount on exams, or vouchers from your employer who has purchased them through their partner program or with learning credits. These vouchers may not be purchased by you, but may be provided to you by: Oracle Academy, an Oracle Workforce Development Partner, or your employer who has purchased vouchers directly from Oracle. This investment is too important to trust to chance. Be sure that you are purchasing your voucher from a reputable source so that you can free your mind to prepare for your exam. View the full Oracle Certification Exam Voucher Use Policy.

    Read the article

  • Does Altova StyleVision support generating this specific Word XML / WordML list numbering and bullet markup, or can it be extended with a custom external XSLT?

    - by Alex S
    Does Altova StyleVision support generating this specific Word XML / WordML list numbering and bullet markup, or can it be extended with a custom external XSLT? PS: I know this is specific to Altova and their dev tools, but just like Eclipse and Visual Studio it is one of the most widely used toolkits for XML-related development and programming. So please do not hate, ban, or give negative votes. Linked below is a section of the documentation for WordML XML and its numbering, lists, bullets, etc. The markup is pretty extensive. I am wondering if this can be replicated via StyleVision, or is this a limitation that needs to be worked around with an external XSLT? Quote: Key links to the markup documentation: http://officeopenxml.com/WPnumbering.php http://officeopenxml.com/WPnumberingAbstractNum.php Also: /WPnumberingLvl.php Short outline of the documentation there: *Numbering, Levels and Lists* - Overview - Defining a Numbering Scheme - Defining a Particular Level ++ Numbering Level Text ++ Numbering Format ++ Displaying as Numerals Only ++ Restart Numbering ++ Picture or Image as Numbering Symbol ++ Justification ++ Overriding a Numbering Definition If StyleVision supports the above, where and how inside StyleVision can I access or use these properties/attributes for the markup? From what I've gathered, I think it does not. In the past, I have written XSL-FO and XSL-WordML by hand, so I could write an add-on external XSLT containing the Word-specific markup for this purpose. Given that the limitation exists, the questions now are: where and how do I create and link this inside StyleVision so as to apply it and work past these capability limitations of StyleVision, and how could I make it apply only to the WordML / Word XML output styling and have it deactivated/disabled for HTML and PDF output?

    Read the article

  • Getting rank for keywords that I don't want to appear on my website [duplicate]

    - by Rober
    This question already has an answer here: Which keyword should I use: colors or colours or a combination of both? 2 answers One of my products has two names. One of them is what I consider correct, and thus it is what I want to appear on my website. The other name is incorrect for me, so I would like to avoid it. But I know that many people will search for my product using the "bad" name. How could I get the "bad" name indexed for my site on search engines even if nobody can read it there? Of course, I want to do it "legally" so that no engine will ban my site, considering it cloaking, black-hat SEO, etc... EDIT: Having that "bad" name in my backlinks is not an option. For example, I would perceive user reviews connecting my site to that word as a negative point. Maybe having my site as a search result for that word could be negative as well, but I think it is worth it.

    Read the article

  • What are some techniques I can use to refactor Object Oriented code into Functional code?

    - by tieTYT
    I've spent about 20-40 hours developing part of a game using JavaScript and HTML5 canvas. When I started I had no idea what I was doing. So it started as a proof of concept and is coming along nicely now, but it has no automated tests. The game is starting to become complex enough that it could benefit from some automated testing, but it seems tough to do because the code depends on mutating global state. I'd like to refactor the whole thing using Underscore.js, a functional programming library for JavaScript. Part of me thinks I should just start from scratch using a Functional Programming style and testing. But, I think refactoring the imperative code into declarative code might be a better learning experience and a safer way to get to my current state of functionality. Problem is, I know what I want my code to look like in the end, but I don't know how to turn my current code into it. I'm hoping some people here could give me some tips a la the Refactoring book and Working Effectively With Legacy Code. For example, as a first step I'm thinking about "banning" global state. Take every function that uses a global variable and pass it in as a parameter instead. Next step may be to "ban" mutation, and to always return a new object. Any advice would be appreciated. I've never taken OO code and refactored it into Functional code before.
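
    As a concrete illustration of the two "bans" described above (the idea carries over to JavaScript unchanged; the sketch below happens to be Python, and the game-state names are made up): global, mutated state becomes an explicit parameter, and the function returns a fresh value instead of modifying its input, which makes it trivially testable.

        # Before: depends on and mutates module-level state, so it is hard to test.
        player = {"x": 0, "y": 0}

        def move_player(dx, dy):
            player["x"] += dx
            player["y"] += dy

        # After: state comes in as a parameter and a new dict comes back out.
        def move(player_state, dx, dy):
            return {**player_state, "x": player_state["x"] + dx, "y": player_state["y"] + dy}

        # The pure version needs no setup or teardown to test:
        assert move({"x": 0, "y": 0}, 2, 3) == {"x": 2, "y": 3}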

    Read the article

  • Allow (and correct the URL) when there is a special character such as %26 using IIS and the rewrite module

    - by plumtreematt
    I'm struggling with a legacy app that uses special characters like %26 in the URL. The characters don't affect the app but can't be changed, so I'm trying to get IIS to deal with them. I've tried to ignore them using multiple methods, but nothing seems to work. So now I installed the IIS rewrite module and added a rewrite rule to web.config to replace the characters %26 with _, for example: <rewrite> <rules> <rule name="ampersand" patternSyntax="Wildcard" stopProcessing="true"> <match url="*%26*" /> <action type="Redirect" url="{R:1}_{R:2}" /> </rule> </rules> </rewrite> The problem is that IIS responds with "Bad Request" before the rewrite rule ever gets called. So my question is this: how can I change the order of precedence so that the mod rewrite filter will be called before IIS puts the ban hammer down on that URL?

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are. 1 – Branding in a post-digital world, Graham Hales This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like on overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequentially, they experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage can negatively-impact buying decisions increasingly poorly. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well. 
The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling? 2 – Challenging our core beliefs about human behaviour, Mark Earls This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational, they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London Riots and the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which, in turn made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word: People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects. Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so  intervening with information may not be appropriate. Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think. Things tend to converge – Behaviour trends to a consensus normal. When faced with choices people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd. Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.  Where we need to be: “A bigger boy made me do it” Where we are: “a wizard did it and ran away” However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”. 
UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense” and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look. 4 – Changing behaviour through communication, Stephen Donajgrodzki This was a fantastic follow-up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler. Coordinated, easily-observed environmental pressures create preconditions for change and build motivation. (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability) A triggering event leads to a change attempt. (getting a cold and panicking about how bad the cough is) Interventions can be made to enable an attempt (NHS services, public information, nicotine patches) If it succeeds – yay. If it fails, there’s strong negative reinforcement. Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”. The idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms. 
Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education. 5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian) The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize  grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim): Set goals Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals. Think like a start-up This is the “lean” stuff. Try things, fail quickly, respond. Don’t restrict platforms Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter. Track your stats Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs? Build brand advocates Do things to involve people and make them awesome, and they’ll cheer-lead for you. The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building. Thoughts I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentations suggests it may be) then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

    Read the article

  • Trouble with MySQL: CONCAT_WS(' ', name_first, name_middle, name_last) like '%keyword%'

    - by AJB
    Hey folks, I'm setting up a keyword search across multiple fields: name_first, name_middle, name_last, but I'm not getting the results I'd like. Here's the query: "SELECT accounts_users.user_ID, users.name_first, users.name_middle, users.name_last, users.company FROM accounts_users, users WHERE accounts_users.account_ID = '$account_ID' AND accounts_users.user_ID = users.id AND CONCAT_WS(' ', users.name_first, users.name_middle, users.name_last) LIKE '$user_keyword%' ORDER BY users.name_first ASC" So, if I've got three names in the DB: Aaron J Ban, Aaron J Can, Bob L Lawblaw - and if user_keyword == "bob lawblaw", I get no result. If user_keyword == "bob L" then it returns Bob L Lawblaw. Obviously I can't force people to include the person's middle name in their keyword search, but I'm stuck for the proper way to do this. All help is greatly appreciated.
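
    One common workaround, sketched below in Python for illustration (the table and column names are taken from the question), is to split the keyword into words and require each word to match the concatenated name independently, instead of matching the whole phrase at once; that way "bob lawblaw" still matches even though the middle initial is missing from the search term.

        def build_name_search(user_keyword):
            """Build a WHERE fragment that matches each keyword term independently."""
            concat = "CONCAT_WS(' ', users.name_first, users.name_middle, users.name_last)"
            terms = user_keyword.split()
            where = " AND ".join(f"{concat} LIKE %s" for _ in terms)   # one LIKE per term
            params = [f"%{term}%" for term in terms]
            return where, params

        where, params = build_name_search("bob lawblaw")
        # where  -> "CONCAT_WS(...) LIKE %s AND CONCAT_WS(...) LIKE %s"
        # params -> ['%bob%', '%lawblaw%']
        # then: cursor.execute(f"SELECT ... FROM accounts_users, users WHERE ... AND {where}", params)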

    Read the article

  • Problem when loading image from google chart api URL

    - by user304839
    I want to show concentric charts from the Google Chart API on the iPhone, but the imageData variable returns a null value and the image is not loaded from the URL into the image view. This is the code: NSData *imageData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:@"http://chart.apis.google.com/chart?cht=pc&chd=t:120,45|120,60,50,70,60&chs=300x200&chl=||helo|wrd|india|pak|ban&chco=FFFFFF|FFFFFF,e72a28|a9d331|ffce08|8a2585|184a7d"]]; NSLog(@"%@",imageData); UIImage *myimage = [[UIImage alloc] initWithData:imageData]; self.myImageView.image=myimage; Please, can anyone help me overcome this problem?

    Read the article

  • Applescript access to last.fm app via application icon in menu bar

    - by Mark
    Hi, I want to create an Applescript to drive the last.fm player app. I'm trying to do this via last.fm application icon in the menu bar rather than using the main application menus, as this approach (I think) won't cause last.fm to switch to the foreground. The overall plan is to bind my script to a quicksilver trigger so I can stop|start|skip|love|ban|tag tracks from the keyboard. My problem is I can't find what UI element to bind the applescript to. I've used UI Browser to scan through the UI object model but it draws a blank with the last.fm icon in the menu bar. Any thoughts appreciated.

    Read the article

  • Prevent IE users from visiting my site?

    - by Paul Hatcherian
    Internet Explorer has caused me a lot of trouble over the years, between security problems, memory leaks, endless CSS and JavaScript hacks to get my site to look correct, and inconsistencies between releases, I've spent countless hours as the hapless victim of IE's idiosyncrasies. Well that ends today, I've decided to take matters into my own hands and ban all users of IE from visiting my website. That will teach them to use such a cruddy browser. My question is how best to do this? I don't want to rely on JavaScript, which could be disabled, nor the request agent string, which could be tampered with. A clever user could even temporarily switch to Firefox or Chrome just to visit my site. Ideally, I'd have a list of the IP addresses of every IE user in the world and restrict based on the IP address. The main problem I'm having, aside from getting the list in the first place, is how do I keep it updated? Thanks!

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • What is the email limit on Google Apps Script?

    - by jmvidal
    Can someone tell me if there is a webpage that lists the official Google limit on emails sent from a Google Apps Script? In testing my little script I got a Service invoked too many times: email (# 59) and now I can't send any more emails. The obvious place for this information would be in the MailApp.sendEmail documentation, but that does not say anything about a limit. I found this discussion on the Google forum from 2/11/10 where users discuss a 100 or 500 emails/day limit, with a 24-hour ban, but no one from Google provided an official answer. Note that this is for Google Apps Script, which is different from Google App Engine, which does have well-published limits.

    Read the article

  • RESTful membership

    - by FoxDemon
    I am currently trying to design a RESTful MembershipsController. The controller action update is used only for promoting, banning, approving, ... members. To invoke the update action the URL must contain a parameter called type with the appropriate value. I am not too sure if that is really RESTful design. Should I rather introduce separate actions for promoting, ... members? class MembershipsController < ApplicationController def update @membership= Membership.find params[:id] if Membership.aasm_events.keys.include?(params[:type].to_sym) #[:ban, :promote,...] @membership.send("#{params[:type]}!") render :partial => 'update_membership' end end end

    Read the article

  • A quick overview of Facebook's DB?

    - by Matt
    Hey guys, I find it hard to believe that Facebook uses simple SQL; surely it would use some other method, but let's assume for now that it does use SQL. How would the code assembling the 'wall' work? Let's say that there are three tables (just for the example): Friends: id (entry key) - uid (your id) - fid (your mate's id) Wall: id (entry key) - username - comment - time - commentcount comments: id (entry key) - wid (wall id (original comment)) - reply - time Let's forget about the like part, reporting etc., as well as mod things (ip, ban etc.) How would this work? Select wall.id, wall.username, wall.comment, wall.time, wall.commentcount, comments.wid, comments.reply, comments.time FROM wall inner join comments ON wall.id=comments.wid ORDER BY wall.time; That's your own wall, but how do they get a friend's? A heap of unions?

    Read the article

  • Python: Find X to Y in a list of strings.

    - by TheLizardKing
    I have a list of maybe a hundred or so elements that is actually an email, with each line as an element. The list is slightly variable because lines that have a \n in them are put in a separate element, so I can't simply slice using fixed values. I essentially need a variable start and stop phrase (it needs to be a partial search as well, because one of my start phrases might actually be Total Cost: $13.43, so I would just use Total Cost:). The same goes for the end phrase. I also do not wish to include the start/stop phrases in the returned list. In summary: email = ['apples','bananas','cats','dogs','elephants','fish','gee'] start = 'ban' stop = 'ele' the magic here new_email = ['cats','dogs'] NOTES While the formatting of the email is not perfect, it is fairly consistent, so there is only a slim chance a start/stop phrase will occur more than once. There are also no blank elements.
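
    For what it's worth, here is a minimal sketch of one way to do it, using the question's own example data; it assumes the start/stop phrases appear at the start of their lines (swap startswith for a substring test if the match can occur anywhere in the line).

        def between(lines, start, stop):
            """Return the elements strictly between the first line starting with
            `start` and the next line starting with `stop`."""
            out, collecting = [], False
            for line in lines:
                if not collecting and line.startswith(start):   # or: `start in line`
                    collecting = True
                    continue
                if collecting and line.startswith(stop):
                    break
                if collecting:
                    out.append(line)
            return out

        email = ['apples', 'bananas', 'cats', 'dogs', 'elephants', 'fish', 'gee']
        print(between(email, 'ban', 'ele'))   # ['cats', 'dogs']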

    Read the article

  • jQuery slider PNG black borders in IE8

    - by Thomas
    Greetings, I'm having a lot of trouble with the IE8 bug that produces black borders when using a jQuery slider with transparent PNG images. I'm using a slightly modified version of the Nivo slider. I have searched high and low for fixes and blocks of code, but so far none have worked. What happens is that as soon as the image cycles it gets the black border and looks like shit (only in IE8). Does anyone know a working fix for this? Or do we just have to ban IE from all computers?

    Read the article

  • Varnish: invalidate URLs by regex from the backend

    - by ooouuiii
    Say I have some highly-visited front page which displays the number of some items by category. When an item is added or deleted I need to invalidate this front page/URL and some 2 others. What is the best practice for invalidating those URLs from the backend in Varnish (4.x)? From what I gathered, I can: implement my own HTTP PURGE handler in the VCL configuration file that "bans" URLs matching a regex received from the backend, and send 3x HTTP PURGE requests for those 3 URLs. But is this approach safe for this automatic usage? Basically I need to invalidate some views every time a related entity is inserted/updated/deleted. Can it lead to ban list accumulation and increasing CPU consumption? Is there any other approach? Thanks.
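
    As a sketch of the "issue a ban from the backend" half, here is what the application side could look like in Python (illustrative only: it presupposes a vcl_recv handler on your side that, for trusted client IPs, accepts a BAN method and calls ban("req.url ~ " + req.http.X-Ban-Regex); the method name and header are assumptions, not Varnish defaults). On the accumulation question: bans do pile up on the ban list until they have been tested, and bans written against req.* cannot be processed by the background ban lurker, so expressing them against obj.* attributes where possible keeps the list, and the CPU cost, bounded.

        import requests

        VARNISH = "http://127.0.0.1:6081"           # assumption: where Varnish listens
        FRONT_PAGE_AND_FRIENDS = (r"^/$", r"^/categories/12$", r"^/items/recent$")   # made-up URLs

        def ban(url_regex):
            """Ask Varnish to ban every cached object whose URL matches the regex."""
            resp = requests.request("BAN", VARNISH, headers={"X-Ban-Regex": url_regex})
            resp.raise_for_status()

        # call this from the insert/update/delete code path of the related entity
        def invalidate_front_pages():
            for pattern in FRONT_PAGE_AND_FRIENDS:
                ban(pattern)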

    Read the article
