Search Results

Search found 9960 results on 399 pages for 'iwork pages'.

Page 288 of 399

  • As a team should we develop locally and merge into the dev server, or develop on the dev server?

    - by CogitoErgoSum
    Hey, recently I was tasked with writing up formal procedures for a team-based development environment. We have several projects with multiple modules each. Right now there are only two programmers, but there are plans to expand to 4-6. Each programmer will be working on the same project, and possibly on the same pages, which could cause overwriting or merge problems. So far the ideal solution I have come up with is: local development (WAMP/VM or some virtual server instance on each developer's own machine). Once a developer has finished their work, they check it into the CVS repository and merge it with other fixes, etc. The CVS version is then deployed to the primary dev server for testing by the devs. The MySQL databases are kept on the primary dev server, and users may connect to them remotely. Any schema or data alterations are run through a DB admin, who will notify all devs of any DB changes (which should be rare). Does anyone see an issue with this, or have a better solution?

    Read the article

  • Generating dynamic Word documents for mass mailing

    - by bluesystem
    I need to generate a mass mailing based on a Word document template with PHP. Given is a database with the addresses and the data that need to be filled into my Word template. I want to generate a single Word document with the different addresses and field contents from the database. We have a Linux server and the COM object is not available. Is there a ready-to-use class to do this? Have you had any experience with PHPWord? What is the best practice in this case? Ideally the client should just upload the Word master document with the fields that need to be filled in, which is then merged into a multi-page Word document containing the whole mailing.
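
    A minimal sketch of the template approach, assuming a current PHPOffice/PhpWord install and its TemplateProcessor class; the placeholder names, column names and output paths are made up for the example, and it produces one file per recipient (stitching them into a single multi-page document is a separate step):

        <?php
        require 'vendor/autoload.php';

        use PhpOffice\PhpWord\TemplateProcessor;

        // $recipients is assumed to be the rows fetched from the address database.
        foreach ($recipients as $i => $row) {
            $tpl = new TemplateProcessor('mailing_template.docx');  // contains ${name}, ${street}
            $tpl->setValue('name',   $row['name']);
            $tpl->setValue('street', $row['street']);
            $tpl->saveAs("letters/letter_{$i}.docx");
        }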

    Read the article

  • Custom Error mode in Web.Config File

    - by Zerotoinfinite
    I have deployed my application on a server and now I am getting this error: "To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off". Notes: The current error page you are seeing can be replaced by a custom error page by modifying the "defaultRedirect" attribute of the application's <customErrors> configuration tag to point to a custom error page URL." I have defined custom error pages for my application. Please help me figure out how to rectify this issue. Thanks in advance.
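
    For reference, the relevant web.config section looks roughly like this (a sketch; the error-page paths are hypothetical). "RemoteOnly" keeps full error details on the server while remote visitors get the custom page; switching the mode to "Off" temporarily is the usual way to see the real exception from a remote machine:

        <configuration>
          <system.web>
            <!-- mode="Off" shows detailed errors to everyone; use only while debugging -->
            <customErrors mode="RemoteOnly" defaultRedirect="~/ErrorPage.aspx">
              <error statusCode="404" redirect="~/NotFound.aspx" />
            </customErrors>
          </system.web>
        </configuration>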

    Read the article

  • How can I ensure that JavaScript inserted via AJAX will be executed after the accompanying HTML (also inserted via AJAX)?

    - by RenderIn
    I've got portions of pages being replaced with HTML retrieved via AJAX calls. Some of the HTML coming back has JavaScript that needs to be run once in order to initialize the accompanying HTML (setting up event handlers). Since the document has already been loaded, when I replace chunks of HTML using jQuery's .html() function, a jQuery(document).ready(function() {...}); block doesn't execute, because the page loaded long before and this is just a snippet of HTML being replaced. What's the best way to attach event handlers whose code is packaged along with the HTML it's interested in, when that content is loaded via AJAX? Should I just put a procedural block of JavaScript after the HTML, so that when I insert the new HTML block, jQuery will execute the JavaScript immediately? Is the HTML definitely in the DOM and ready to be acted upon by JavaScript delivered in the same .html() call?
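
    One common pattern that sidesteps the problem is to delegate the handlers to a container that is never replaced, so nothing has to re-run after each AJAX insert. A minimal sketch assuming jQuery 1.7+ (the #panel id and .save-button class are invented for the example):

        // Bind once, at page load, on an element that survives the .html() swaps.
        $(document).ready(function () {
            $('#panel').on('click', '.save-button', function () {
                // Runs for any .save-button inside #panel, including markup
                // inserted later via $('#panel').html(ajaxResponse).
                alert('clicked');
            });
        });

        // Note: inline <script> blocks inside the AJAX response are executed by
        // $('#panel').html(response), so one-off init code can also live there.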

    Read the article

  • 'SHA1' is deprecated: first deprecated in OS X 10.7?

    - by sukhvir
    So I was trying to compile some code which uses the SHA1 function. I included the following header:

        #include <openssl/sha.h>

    and I got the following error while compiling:

        test.c:9:5: error: 'SHA1' is deprecated: first deprecated in OS X 10.7 [-Werror,-Wdeprecated-declarations]
            SHA1(msg, strlen(msg), hs);
            ^

    But the man pages still have the description for that function. Can anyone suggest another header with a similar function (MD5 or SHA1)? PS - do I also need to link any libraries when compiling with gcc?
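
    On OS X/macOS the usual replacement is Apple's CommonCrypto, which ships with the system and needs no extra linker flag. A minimal sketch (the message string is just an example):

        #include <stdio.h>
        #include <string.h>
        #include <CommonCrypto/CommonDigest.h>

        int main(void) {
            const char *msg = "hello world";
            unsigned char hash[CC_SHA1_DIGEST_LENGTH];

            /* one-shot call, analogous to OpenSSL's SHA1() */
            CC_SHA1(msg, (CC_LONG)strlen(msg), hash);

            for (int i = 0; i < CC_SHA1_DIGEST_LENGTH; i++)
                printf("%02x", hash[i]);
            printf("\n");
            return 0;
        }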

    Read the article

  • Paging a UIScrollView with a large PDF

    - by Fousa
    I'm trying to create a simple UIScrollView with paging, and I want to be able to scroll through a large PDF document, but this gives me some problems... I tried the following options: converting all the PDF pages to UIImages at startup, which works but makes startup very slow; and manually drawing the PDF page in drawRect:, but again this was slow... I would prefer not to load everything at startup but to render pages during use. Has anyone done this recently? I can't seem to find a nice example project. Thanks! Jelle
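
    The usual lazy approach is to keep only the CGPDFDocument in memory and render each page on demand as the scroll view approaches it. A rough sketch of the per-page rendering step, written in Swift against Core Graphics (the question predates Swift, so treat this as an illustration of the Core Graphics calls rather than drop-in code):

        import UIKit

        // Render one page of an already-open CGPDFDocument into a UIImage.
        // Call this lazily, e.g. when the scroll view is one page away from needing it.
        func renderPage(_ pageNumber: Int, of document: CGPDFDocument, width: CGFloat) -> UIImage? {
            guard let page = document.page(at: pageNumber) else { return nil }  // page numbers are 1-based

            let box = page.getBoxRect(.mediaBox)
            let scale = width / box.width
            let size = CGSize(width: width, height: box.height * scale)

            let renderer = UIGraphicsImageRenderer(size: size)
            return renderer.image { ctx in
                let cg = ctx.cgContext
                // PDF coordinates are flipped relative to UIKit, so flip the context.
                cg.translateBy(x: 0, y: size.height)
                cg.scaleBy(x: scale, y: -scale)
                cg.drawPDFPage(page)
            }
        }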

    Read the article

  • Remove objects from different environments

    - by Fred
    I have an R script file that executes a second R script via:

        source("../scripts/second_file.R")

    That second file has the following lines:

        myfiles <- list.files(".", pattern = "*.csv")
        ...
        rm(myfiles)

    When I run the master R file I get:

        > source("../scripts/second_file.R")
        Error in file.remove(myfiles) : object 'myfiles' not found

    and the program aborts. I think this has something to do with environments. I looked at the ?rm help page, but found it less than illuminating. I figure I have to give it a position argument, but I'm not sure which.

    Read the article

  • PHP if statement - check two different GET variables?

    - by arsoneffect
    Below is my example script:

        <li><a <?php if ($_GET['page']=='photos' && $_GET['view']!=="projects"||!=="forsale") {
            echo ("href=\"#\" class=\"active\"");
        } else {
            echo ("href=\"/?page=photos\"");
        } ?>>Photos</a></li>
        <li><a <?php if ($_GET['view']=='projects') {
            echo ("href=\"#\" class=\"active\"");
        } else {
            echo ("href=\"/?page=photos&view=projects\"");
        } ?>>Projects</a></li>
        <li><a <?php if ($_GET['view']=='forsale') {
            echo ("href=\"#\" class=\"active\"");
        } else {
            echo ("href=\"/?page=photos&view=forsale\"");
        } ?>>For Sale</a></li>

    I want the PHP to echo href="#" class="active" only when it is not on one of the two pages ?page=photos&view=forsale or ?page=photos&view=projects.
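
    For what it's worth, the first condition as written ($_GET['view']!=="projects"||!=="forsale") is not valid PHP; each side of || needs its own comparison. A sketch of one way to express "active on ?page=photos but not on the projects or forsale views", assuming PHP 7+ for the ?? operator:

        <?php
        $page = $_GET['page'] ?? '';
        $view = $_GET['view'] ?? '';

        // Active only on plain ?page=photos, i.e. not on the projects/forsale sub-views.
        if ($page == 'photos' && !in_array($view, array('projects', 'forsale'))) {
            echo 'href="#" class="active"';
        } else {
            echo 'href="/?page=photos"';
        }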

    Read the article

  • Can I improve performance by refactoring SQL commands into C# classes?

    - by Matthew Jones
    Currently, my entire website does its updating via parameterized SQL queries. It works and we've had no problems with it, but it can occasionally be very slow. I was wondering if it makes sense to refactor some of these SQL commands into classes so that we would not have to hit the database so often; I understand that hitting the database is generally the slowest part of any web application. For example, say we have a class structure like this: Project (comprised of) Tasks (comprised of) Assignments, where Project, Task, and Assignment are classes. At certain points in the site you are only working on one project at a time, so creating a Project instance and passing it among pages (using Session, Profile, or something else) might make sense. I imagine this class would have a Save() method to persist value changes. Does it make sense to invest the time into doing this? Under what conditions might it be worth it?
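
    As a rough illustration of the idea being asked about (not a verdict on whether it is worth it), caching the in-progress object in Session and flushing it with a single parameterized command might look something like this in ASP.NET; the Project class, table and SQL are invented for the sketch:

        using System.Data.SqlClient;

        public class Project
        {
            public int Id { get; set; }
            public string Name { get; set; }

            // Persist only when the user finishes editing, instead of on every change.
            public void Save(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "UPDATE Projects SET Name = @name WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@name", Name);
                    cmd.Parameters.AddWithValue("@id", Id);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

        // Passing the working copy between pages (System.Web):
        //   Session["CurrentProject"] = project;
        //   var current = (Project)Session["CurrentProject"];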

    Read the article

  • XML XHR request resulting in 0 status and empty response text.

    - by deepak
    I had another post for the same problem... I think I put the question the wrong way, so let me give more details: I have test.html on my C:\ drive, and I have a local web server running in Qt with a plugin written for it; the plugin receives the request and sends back the response text "hello". In test.html I make an XHR request, a GET to localhost:8080/test, which should return the text "hello" via that plugin. Now, if I open test.html directly from C:\ it doesn't work - I get readyState 4, status 0, and an empty response text - even though the request does pass through the web server and the plugin. It works fine when test.html is put in the web server's pages directory.
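
    A short sketch of the request for reference: when the page itself is opened from file:// its origin does not match http://localhost:8080, so the browser blocks the cross-origin read and reports readyState 4 with status 0 even though the server handled the request. Served from the same web server, a relative URL avoids the issue entirely:

        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/test", true);          // relative URL: same origin as the page
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                alert(xhr.responseText);         // expected: "hello"
            }
        };
        xhr.send();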

    Read the article

  • Don't Change URL in Browser When Clicking <asp:LinkButton>

    - by Corey Goldberg
    I have an ASP.NET page that uses a menu based on asp:LinkButton controls in a Master page. When a user selects a menu item, an onclick handler calls a method in my C# code-behind. The method it calls just does a Server.Transfer() to a new page, which, from what I have read, is not supposed to change the URL displayed in the browser. The problem is that the URL does change in the browser as the user navigates the menu to different pages. Here is an item in the menu:

        <asp:LinkButton id="foo" runat="server" onclick="changeToHelp"><span>Help</span></asp:LinkButton>

    In my C# code I handle the event with a method like:

        protected void changeToHelp(object sender, EventArgs e)
        {
            Server.Transfer("Help.aspx");
        }

    Any ideas how I can navigate through the menu without the browser's URL bar changing?

    Read the article

  • Free sounds for iPhone games.

    - by Roger Gilbrat
    I'm looking for a good source for sound effects for my iPhone game. I like SoundSnap, but they charge you for every sound you download, not just the ones you end up using. Sound design in games can be a very iterative process and I don't want to pay for 10 sounds I never use before finding the right one. freesound.org is another really good site, but they are all CC licensed and can't be used in commercial games. It's unclear if a free iPhone App is considered commercial (or if my final game will be free). Googling for this returns a huge number of pay sites with horrible web pages. I don't mind paying for sounds, but I want to pay for what I use. Any good personal recommendations?

    Read the article

  • Anonymous users support vs Google bot

    - by Andy
    I have a User class in my web app that represents the user currently logged in. Every time a user visits a page, a User instance is populated based on authentication data supplied in cookies. A User instance is created even for an anonymous visitor, and a corresponding new record is created in the User table in the database. This approach allows me to save some state for the current user regardless of its type. The problem with this approach is the Google bot and other non-human web organisms crawling my pages: every time a bot starts to walk around the site, thousands of useless records are created in the database, each of them used for only a single page. Question: what is the best trade-off? How do I support anonymous users and save their state without taking on so much overhead from cookieless bots?

    Read the article

  • How do I load the background image from another page?

    - by bbeckford
    Hi all, I'm creating a page that loads content from other pages using jQuery like this:

        $('#newPage').load('example.html' + ' #pageContent', function() {
            loadComplete();
        });

    That all works fine. Now what I want to do is change the background image of the current page to the background image of the page I'm loading from. This is what I'm doing now, but I can't for the life of me get it to work:

        $.get('example.html', function(data) {
            var pageHTML = $(data);
            var pageBody = pageHTML.$('body');
            alert(pageBody.attr("background"));
        });

    What am I doing wrong? Thanks, -Ben
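
    One wrinkle here is that jQuery strips the <html>/<body> tags when it parses a full document, so $(data).find('body') comes back empty (and pageHTML.$('body') isn't valid jQuery at all). A common workaround is to pull the attribute, or an inline background-image style, straight out of the raw response text; a hedged sketch:

        $.get('example.html', function (data) {
            // Old-style <body background="..."> attribute:
            var attrMatch = data.match(/<body[^>]*\sbackground=["']([^"']+)["']/i);
            // Or a CSS background-image in the body tag's style attribute:
            var cssMatch = data.match(/<body[^>]*background-image\s*:\s*url\(([^)]+)\)/i);

            var image = attrMatch ? attrMatch[1] : (cssMatch ? cssMatch[1] : null);
            if (image) {
                $('body').css('background-image', 'url(' + image + ')');
            }
        });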

    Read the article

  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal: I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient class to download the HTML string. The site I'm crawling must be specially made, because the HTML received from WebClient.DownloadString() is different from the HTML I get when I view the page's source in a browser. This seems intentional, because the only info I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.
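
    Two usual suspects are worth ruling out before assuming deliberate cloaking: the server may vary its markup on the User-Agent/Accept headers, or the price may be filled in by JavaScript after the page loads (which WebClient never executes, so it would be missing from any plain download). A sketch of sending browser-like headers, with a made-up product URL:

        using System;
        using System.Net;

        class Spider
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Mimic a browser request; some sites serve different markup to unknown clients.
                    client.Headers[HttpRequestHeader.UserAgent] =
                        "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";
                    client.Headers[HttpRequestHeader.Accept] =
                        "text/html,application/xhtml+xml";

                    string html = client.DownloadString("http://example.com/product/123");
                    Console.WriteLine(html.Length);
                }
            }
        }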

    Read the article

  • trigger click behaviour of image-map's area

    - by Amit
    I have an image map set up, and each area in the image map has an href defined; the href on each area points to another page in my application. I generate a small ul which lists the name attribute of each area tag, and I want the dynamically generated ul/li elements to imitate the click behaviour of the corresponding area tag. For this, I have the following jQuery set up:

        $('li').click(function(e) {
            $('area[name=' + $(this).html() + ']').trigger('click');
        });

    But the above works well only in IE6+; Firefox does not fire the click event. I also tried the click() variant, but to no avail. Looking forward to some help. Thanks :)
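
    A synthetic click generally does not make browsers follow an element's href, which is probably why Firefox appears to do nothing. Since the goal is simply to go where the area points, reading its href and navigating directly is a more portable sketch:

        $('li').click(function () {
            var target = $('area[name="' + $(this).text() + '"]').attr('href');
            if (target) {
                window.location.href = target;   // follow the same link the area points to
            }
        });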

    Read the article

  • How to override a JS function from a Firefox extension?

    - by BruceBerry
    Hello, I am trying to intercept calls to document.write for all pages. Setting up the interception inside the page, by injecting a script like the following, is easy and works:

        function overrideDocWrite() {
            alert("Override called");
            document.write = function(w) {
                return function(s) {
                    alert("special dom");
                    w.call(this, wrapString(s));
                };
            }(document.write);
            alert("Override finished");
        }

    But I would like my extension to set up the interception for each document object from inside the extension itself. I couldn't find a way to do this. I tried to listen for the "load" event and set up the interception there, but it also fails. How do I hook calls to document.write from an extension?

    Read the article

  • [LaTeX]: Add an included PDF to the TOC

    - by ILoveMyLatexReport
    In my document I include a PDF using \includepdf[pages=-]{./mypdf.pdf} The problem I'm having is how to add a TOC entry for this PDF. It is supposed to be an appendix. I tried adding a new section in the appendix, but of course the section heading can't be printed on the same page as the included PDF, so the resulting TOC line points to the wrong page. If I use \addcontentsline I lose the numbering, and the page is wrong too because the included PDF actually starts on the next page... I'm a bit lost here, so I would really appreciate it if someone knows how to do this. Note: the PDF I am trying to include was not generated from LaTeX. Thanks in advance.
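
    One approach worth trying, assuming \includepdf comes from the pdfpages package: its addtotoc key attaches the TOC entry to the included page itself, so the page number in the TOC comes out right. The heading text and label below are placeholders:

        % addtotoc = {page within the PDF, sectioning unit, TOC level, heading text, label}
        \includepdf[pages=-,
                    addtotoc={1, section, 1, Appendix: external report, app:report}]
                   {./mypdf.pdf}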

    Read the article

  • MySQL PHP query - news/friends feed

    - by rpsep2
    I want to show a user the recent uploads from their friends. I have the user's friends' ids in an array, $friends. A user could potentially have thousands of friends. I can select the uploads from one of a user's friends with:

        $row = $mysqli->query("SELECT * FROM photos WHERE uploader_id = ".$friend."
                               ORDER BY date_uploaded DESC LIMIT ".$page.", 25");

    But I need to find the uploads of all of a user's friends. I thought about doing this in a loop iterating over the $friends array, but then I'd potentially be running thousands of MySQL queries. How can I do this most efficiently? To clarify: search a photos table for photos which are uploaded by specific users (friends), held in the $friends variable, sort by date_uploaded, and limit to x results so I can have pages 1, 2, 3, etc.
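
    The usual answer is a single query with an IN () list rather than one query per friend. A sketch using mysqli, assuming the friend ids are integers (they are cast to be safe) and the same 25-per-page paging:

        <?php
        // Collapse the id array into "1,2,3,..." for a single IN () clause.
        $ids = implode(',', array_map('intval', $friends));

        $offset  = (int)$page;
        $perPage = 25;

        $result = $mysqli->query(
            "SELECT * FROM photos
             WHERE uploader_id IN ($ids)
             ORDER BY date_uploaded DESC
             LIMIT $offset, $perPage"
        );

        while ($photo = $result->fetch_assoc()) {
            // render $photo ...
        }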

    Read the article

  • BufferedReader ready method in a while loop to determine EOF?

    - by BobTurbo
    I have a large file (the Wikipedia English articles-only database as an XML file) that I am reading one character at a time using BufferedReader. The pseudocode is:

        file = BufferedReader...
        while (file.ready())
            character = file.read()

    Is this actually valid? Or will ready() just return false when it is waiting for the disk to return data, not only when EOF has been reached? I tried to use if (file.read() == -1) but seemed to run into an infinite loop that I literally could not find. I am just wondering if it is reading the whole file, as my statistics say 444,380 Wikipedia pages have been read, but I thought there were many more articles.
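
    ready() only reports whether a read can be made without blocking, so it is not an EOF test. The reliable idiom is to keep read()'s result in an int and compare that to -1 (storing it in a char first is a classic cause of the infinite loop, since a char can never equal -1). A minimal sketch with a hypothetical file name:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class CharCount {
            public static void main(String[] args) throws IOException {
                long count = 0;
                try (BufferedReader reader = new BufferedReader(new FileReader("enwiki.xml"))) {
                    int c;                       // must be int: read() signals EOF with -1
                    while ((c = reader.read()) != -1) {
                        count++;                 // process (char) c here
                    }
                }
                System.out.println("characters read: " + count);
            }
        }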

    Read the article

  • How to get Apache mod_cache to work with mod_wsgi (Django)?

    - by harmv
    I thought I'd speed up my Django projects by letting Apache do some caching for me. Unfortunately I see that Apache never caches my dynamic pages. Does mod_cache have problems with code served via mod_wsgi? My Apache config:

        <VirtualHost *:80>
            ServerName myserver.com

            CacheEnable mem /
            # for testing only
            CacheIgnoreQueryString On
            CacheIgnoreCacheControl On

            WSGIDaemonProcess aname processes=1 threads=25
            WSGIProcessGroup aname

            Alias /media/ /home/harm/projects/test/media/
            WSGIScriptAlias / /home/harm/projects/test/wsgi.py
        </VirtualHost>

    The response does have the correct caching headers:

        Content-Length    2647
        Content-Encoding  gzip
        Vary              Accept-Encoding
        Cache-Control     public, max-age=3600
        Keep-Alive        timeout=15, max=100
        Connection        Keep-Alive
        Content-Type      application/x-javascript

    Am I missing something?

    Read the article

  • What is the best way to mock a 3rd-party object in Ruby?

    - by spinlock
    I'm writing a test app using the twitter gem and I'd like to write an integration test, but I can't figure out how to mock the objects in the Twitter namespace. Here's the function that I want to test:

        def build_twitter(omniauth)
          Twitter.configure do |config|
            config.consumer_key = TWITTER_KEY
            config.consumer_secret = TWITTER_SECRET
            config.oauth_token = omniauth['credentials']['token']
            config.oauth_token_secret = omniauth['credentials']['secret']
          end
          client = Twitter::Client.new
          user = client.current_user
          self.name = user.name
        end

    and here's the rspec test that I'm trying to write:

        feature 'testing oauth' do
          before(:each) do
            @twitter = double("Twitter")
            @twitter.stub!(:configure).and_return true
            @client = double("Twitter::Client")
            @client.stub!(:current_user).and_return(@user)
            @user = double("Twitter::User")
            @user.stub!(:name).and_return("Tester")
          end

          scenario 'twitter' do
            visit root_path
            login_with_oauth
            page.should have_content("Pages#home")
          end
        end

    But I'm getting this error:

        1) testing oauth twitter
           Failure/Error: login_with_oauth
           Twitter::Error::Unauthorized:
             GET https://api.twitter.com/1/account/verify_credentials.json: 401: Invalid / expired Token
           # ./app/models/user.rb:40:in `build_twitter'
           # ./app/models/user.rb:16:in `build_authentication'
           # ./app/controllers/authentications_controller.rb:47:in `create'
           # ./spec/support/integration_spec_helper.rb:3:in `login_with_oauth'
           # ./spec/integration/twit_test.rb:16:in `block (2 levels) in <top (required)>'

    The mocks above are using rspec, but I'm open to trying mocha too. Any help would be greatly appreciated.
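
    For what it's worth, the doubles in that before block are never wired into the code under test: build_twitter still calls the real Twitter::Client.new. One way to intercept that, sketched with current rspec-mocks syntax (the older .stub! style from the question would work similarly):

        before(:each) do
          @user   = double("Twitter::User", name: "Tester")
          @client = double("Twitter::Client", current_user: @user)

          # Make the code under test receive our double instead of a real client,
          # and skip the real OAuth configuration.
          allow(Twitter::Client).to receive(:new).and_return(@client)
          allow(Twitter).to receive(:configure)
        end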

    Read the article

  • Fix internal links in JS

    - by FB55
    I just created a script which extracts the article out of a webpage via server-side JS. (If you're interested: it's used for http://pipes.yahoo.com/fb55/expandr.) I just have a little problem with internal links. Some pages include links like: /subfolder/subpage.html What I need to do is fix them by setting their root, like this: protocol://secondlevel.firstlevel/subfolder/subpage.html I'm using E4X for processing the page. I don't want to show my current attempt; it's creepy, buggy and slow. Does anybody have a solution for me?
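
    In a modern JavaScript runtime the URL constructor does exactly this resolution (the E4X environment in the question predates it, where the equivalent string handling has to be written by hand), so as a sketch of the intended behaviour:

        // Resolve a link found in the page against the page's own address.
        function absolutize(href, pageUrl) {
            // new URL(relative, base) handles "/rooted", "relative", "../up"
            // and already-absolute links uniformly.
            return new URL(href, pageUrl).href;
        }

        absolutize('/subfolder/subpage.html', 'http://secondlevel.firstlevel/some/page.html');
        // -> "http://secondlevel.firstlevel/subfolder/subpage.html"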

    Read the article

  • C#: creating a file from an input stream

    - by daemonkid
    My component will receive a PDF file as a file stream, from which I will need to create a file. For testing purposes I am trying to read a file using a stream and recreate it at a different location, but the recreated file comes out blank (though it has the same number of pages). This is the code:

        StreamReader sr = new StreamReader(_filePath);
        str = sr.ReadToEnd();
        File.WriteAllText(@"C:\recreated.pdf", str);

    What am I doing wrong? Thanks for your time.
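
    The likely culprit is treating the PDF as text: StreamReader and WriteAllText run the bytes through a text encoding, which corrupts binary content. Copying the bytes (or the stream) directly keeps the file intact; a short sketch with hypothetical paths:

        using System.IO;

        class CopyPdf
        {
            static void Main()
            {
                // Byte-for-byte copy, no text decoding involved.
                byte[] bytes = File.ReadAllBytes(@"C:\source.pdf");
                File.WriteAllBytes(@"C:\recreated.pdf", bytes);

                // When the input arrives as a Stream rather than a path:
                // using (var output = File.Create(@"C:\recreated.pdf"))
                //     inputStream.CopyTo(output);   // Stream.CopyTo: .NET 4+
            }
        }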

    Read the article

  • Employer wants direct, no-hack IE6 support for CSS. Should I talk him out of it?

    - by DavidR
    I'm currently employed by a website designer; he gets the clients and sends me a mockup in a Fireworks file, and I send him the HTML/CSS/JS. The problem is that he wants direct IE6 compatibility for every site I build - that is, no conditional IE6 hacks, no separate style sheets. A lot of my HTML has suffered because of it. I just started writing HTML with him last summer; he took me in as an intern and taught me everything about it. Since then I have built four web pages, but I haven't yet made anything I'm really proud of. Should I be trying harder to create stellar code despite these limitations, or should I sit him down and explain that his demands are killing the code for modern browsers?

    Read the article
