Search Results

Search found 60978 results on 2440 pages for 'web development'.

  • Alternatives for saving data with jQuery

    - by Phil Vallone
    I am not sure if this question is considered too broad, but I would like to reach out to my fellow programmers to see what alternatives are out there for saving data using jQuery. I have a content management system that generates a set of HTML pages called an IETM (Interactive Electronic Technical Manual). The pages are written in HTML and use jQuery. The IETM is meant to be lightweight, portable and able to run on most modern browsers. I am looking for a way to save data. I have considered cookies and SQLite. Are there any other alternatives for saving data using jQuery?

    Read the article

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate conf:

        "/mnt/je/logs/apache/jesites/web/*.log" {
            missingok
            rotate 0
            size 5M
            copytruncate
            notifempty
            sharedscripts
            postrotate
                /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web
            endscript
        }

    And the compress-and-upload.sh script:

        #!/bin/sh

        # Perform Rotated Log File Compression
        tar -czPf $1/log.gz $1/*.1

        # Fetch the instance id from the instance
        EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
        if [ -z $EC2_INSTANCE_ID ]; then
            echo "Error: Couldn't fetch Instance ID .. Exiting .."
            exit;
        else
            /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz
        fi

        # Removing Rotated Compressed Log File
        rm -f $1/log.gz

    The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there a log file I can check to see whether there are any permission issues? If I execute the script directly from the command line, the file upload works. Thanks.

    Read the article

  • Creating an encrypted, web-based proxy

    - by Jason
    I have moved to Asia where my internet connection is censored and I'd like to check my messages from social sites which happen to be blocked. As virtually all proxy servers are blocked in this country, I've decided to attempt to roll my own encrypted proxy server. Please note, the key word here is encrypted—if the sniffer sees anything like f@c3b00k or w:k:p3d:ia travelling down the wire I'm had. I have a website hosted with GoDaddy (Windows with PHP 5.2 & IIS 7). Is there any way I can set up an encrypted proxy through this service? If so, how, and what open source tools are available to use?

    Read the article

  • DotNetNuke Web Hosting

    DotNetNuke (DNN) is one of the most sought-after Content Management Systems (CMS) for the Internet market. It is used on numerous websites and by many companies across the globe. What makes this... [Author: scheygensmith - Computers and Internet - March 21, 2010]

    Read the article

  • Recording slow web stream

    - by Budric
    I'm trying to record an MPEG-2 video stream from a website that doesn't have the greatest bandwidth. The video often buffers. I want to download the stream and watch it offline. The stream format received is:

        Stream #0.0[0x44]: Audio: mp2, 48000 Hz, stereo, s16, 192 kb/s
        Stream #0.1[0x45]: Video: mpeg2video (Main), yuv420p, 704x576 [PAR 16:11 DAR 16:9], 15000 kb/s, 27.19 fps, 25 tbr, 90k tbn, 50 tbc

    I use the following tool to transcode the stream:

        ffmpeg -i "http://url" -y -vcodec libx264 -b 3000k -acodec copy /tmp/stream.mp4

    Unfortunately, after a few seconds ffmpeg stops recording with an error:

        [mpegts @ 0x1f0b9c0] PES packet size mismatch
        [mp2 @ 0x1f14640] incomplete frame
        Error while decoding stream #0.0
        [mpeg2video @ 0x1f16860] ac-tex damaged at 0 26
        [mpeg2video @ 0x1f16860] Warning MVs not available

    I've tried encoding with VLC as well, with similar issues. Although VLC doesn't stop encoding, the output video has regions where it hangs.

        vlc -I dummy "http://url" --network-caching="1000" --sout="#transcode{vcodec=h264,vb=3000,acodec=mp3,ab=192}:std{access=file,mux=mp4,dst=/tmp/stream.mp4}"

        [mpeg2video @ 0x7f2d4c001e20] ac-tex damaged at 9 33
        [mpeg2video @ 0x7f2d4c001e20] Warning MVs not available
        [mpeg2video @ 0x7f2d4c001e20] concealing 132 DC, 132 AC, 132 MV errors
        [mpeg2video @ 0x7f2d4c001e20] ac-tex damaged at 16 17
        [mpeg2video @ 0x7f2d4c001e20] Warning MVs not available
        [mpeg2video @ 0x7f2d4c001e20] concealing 836 DC, 836 AC, 836 MV errors
        libdvbpsi error (PSI decoder): TS discontinuity (received 4, expected 3) for PID 0

    I also tried FLV transcoding and it has its own set of issues, like the output FLV file hanging in certain parts. Does anyone know what's wrong or how to fix this?

    Read the article

  • What are the correct settings for a subdomain in ZoneEdit?

    - by user99572_is_fine
    I want to create a subdomain for a site hosted by Jimdo (a DIY website builder). Jimdo, however, does not allow subdomains. I am trying to find a workaround where the subdomain is hosted elsewhere while everything else remains as it is; e.g. I use their email service and I want to keep it. The domain is not hosted by Jimdo, but by a host that allows me to edit my zones, and it points to the Jimdo NS. I also have independent hosting with its own NS information; this is where I want to host my subdomain. My thinking was that I could use ZoneEdit as a "fork" that allows me to keep using my Jimdo page as before and, at the same time, directs a subdomain to another host. Provided this is possible: how do I configure ZoneEdit CNAME or NS records to forward visitors to my website and my email to my Jimdo mail account, while pointing a subdomain to another host?

    Read the article

  • Fake links cause crawl error in Google Webmaster Tools

    - by Itai
    Google reported Crawl Errors last week on my largest site through Webmaster Tools. Here is the message: "Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages." The Crawl Errors list is now full of hundreds of fake links, causing 16,519 errors so far. Note that my site does not even have a search.html and is not related to any of the terms shown in those URLs. Inspecting the sources for one of those links, I can see this is not simply an isolated source but a concerted effort: each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can I do about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem to belong to the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors have appeared in a span of 11 days. No crawl errors were being reported previously.

    Read the article

  • How to point DNS records to Amazon AWS Elastic Beanstalk

    - by Lance
    I just created a new environment with AWS and uploaded a .zip file into Elastic Beanstalk. I used to have a friend's server host my site instead of GoDaddy, so I changed my custom DNS nameservers from pointing to my friend's server back to GoDaddy's. AWS told me that I need to add a CNAME record on GoDaddy, and I did: the alias is lance and the host is lance-env.elasticbeanstalk.com. I know this change can take 24-48 hours to take effect, but it's been a day already and when I go to my site a default page from GoDaddy appears. I'm very new to AWS and would just like to find a way for AWS to host my site other than using Route 53.

    Read the article

  • Photoshop, slice, Dreamweaver, web?

    - by Omega
    So I am playing around with Photoshop and Dreamweaver. I have created a site layout and have used the slice function on it. Next, I saved it as HTML & images. In Dreamweaver, I open that HTML file and fill the page with content, links, etc. I have a website and everything, and I would like to use my newly created HTML page on it. But, obviously, if I copy & paste the HTML to my website it won't work, because it will lack the images. Two things, though: I can't find the images, and apparently there are a lot of them. I am sure I am making a big mistake regarding the images. Can someone help me?

    Read the article

  • Hijacking ASP.NET Sessions

    - by Ricardo Peres
    So, you want to be able to access other users' session state from the session id, right? Well, I don't know if you should, but you definitely can do that! Here is an extension method for that purpose. It uses a bit of reflection, which means it may not work with future versions of .NET (I tested it with .NET 4.0/4.5).

        public static class HttpApplicationExtensions
        {
            private static readonly FieldInfo storeField = typeof(SessionStateModule).GetField("_store", BindingFlags.NonPublic | BindingFlags.Instance);

            public static ISessionStateItemCollection GetSessionById(this HttpApplication app, String sessionId)
            {
                var module = app.Modules["Session"] as SessionStateModule;

                if (module == null)
                {
                    return (null);
                }

                var provider = storeField.GetValue(module) as SessionStateStoreProviderBase;

                if (provider == null)
                {
                    return (null);
                }

                Boolean locked;
                TimeSpan lockAge;
                Object lockId;
                SessionStateActions actions;

                var data = provider.GetItem(HttpContext.Current, sessionId.Trim(), out locked, out lockAge, out lockId, out actions);

                if (data == null)
                {
                    return (null);
                }

                return (data.Items);
            }
        }

    As you can see, it extends the HttpApplication class; that is because we need to access the modules collection to get at the Session module. Use with care!
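
    For illustration, a minimal usage sketch (the session id parameter and the "ShoppingCart" key are hypothetical, not part of the original post): with the extension class above in scope, another user's session items can be read like this.

        // Hypothetical usage: assumes the HttpApplicationExtensions class above is in scope,
        // together with the same System.Web / System.Web.SessionState usings as the listing.
        public static class SessionPeekExample
        {
            public static object PeekShoppingCart(string otherSessionId)
            {
                // ApplicationInstance gives us the HttpApplication the extension hangs off.
                var app = HttpContext.Current.ApplicationInstance;
                var items = app.GetSessionById(otherSessionId);

                // Read a value stored in the other session, or null if it wasn't found.
                return (items != null) ? items["ShoppingCart"] : null;
            }
        }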

    Read the article

  • Grapeshot crawler ignoring robots.txt

    - by QF_Developer
    Has anyone come across a crawler called Grapeshot? They are hammering the same page repeatedly on our website. I believe they are looking for ad-related keywords, based on previous content ad campaigns. The odd thing is we never ran any such campaigns on the page they are so interested in. We do have a few pages running AdSense; is this what has attracted Grapeshot? I've added the following declaration to my robots.txt, but they don't seem to be honouring it:

        User-agent: grapeshot
        Disallow: /

    Any ideas on how to block this nuisance crawler? I'm starting to think the best way is to set up IP rules in IIS?

    Read the article

  • How Computer Networks is related to Web/Desktop Java programming

    - by C4CodeE4Exe
    Being a Java programmer, I am wondering how my work experience could help me learn networking skills. I know the C language is used in network socket programming, and I know that if one can program in one language it's not tough to learn another. The question is that I am not able to find much on networking when it comes to Java (maybe my knowledge is limited). Do companies like CISCO or TELUS Inc. rely heavily on programmers with such a background?

    Read the article

  • Need a host which supports OSQA

    - by Josip Gòdly Zirdum
    Hi, I'm looking to install OSQA and see how it goes. I have a great niche which I think may work really well, but until I get a large enough audience I'd like to use shared hosting, then move up to dedicated or VPS hosting. Almost all the hosts I've looked at don't support something OSQA needs; I need relatively cheap shared hosting with cPanel. Any recommendations? It needs to support:

        - Django
        - Python markdown
        - html5lib
        - Python OpenId
        - South

    Read the article

  • Tools for building long-running JavaScript webapp

    - by FilipK
    Given my lack of familiarity with such tools, could you suggest what tools / frameworks would be suitable for developing a long-running JavaScript webapp? The webapp would display a constantly updating chart. The updates would come through WebSockets (preferably) or XMLHttpRequest. I know and have written JavaScript with jQuery, but for this task I assume something like Backbone.js or ExtJS would be appropriate (or maybe not?).

    Read the article

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. This project, it seems to me, will primarily involve reading data from a database and making graphs out of it. It will require me to make custom queries based on whatever the client is looking at. I think it is going to be what this guy calls an Ad Hoc Query tool. My plan is to make it a database-driven website so I can utilize jQuery's dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that its functional nature makes it a good language for asynchronous work, and I read about how you can use it with LINQ to SQL and how easy it is to make queries without actually putting the query language in. I understand the concept of the MVC design pattern, but I don't understand what people mean about C# being the front end and F# being the back end. Can someone clarify this for me? Also, what are your thoughts about doing this project in this way? Any comments and thoughts are greatly appreciated. I feel as if learning F# will be a great learning experience for me. My guess is that the F# back end is the part that controls the calls to the database; F# is possibly the model part of the design pattern, C# is the controller, and the HTML, JavaScript and jQuery side will be my view. Could someone clarify, please?
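
    For what it's worth, a minimal C# sketch of how that split is usually meant (all type and member names below are hypothetical, not from the project described above): the ASP.NET MVC controller lives in the C# web project, while the implementation of the data-access interface would live in a separate F# library project that the web project references.

        using System.Collections.Generic;
        using System.Web.Mvc;

        // The interface the C# web project codes against; a type implementing it would be
        // written in the F# "back end" project (e.g. running the LINQ queries against the
        // database). All names here are placeholders.
        public interface ISalesData
        {
            IEnumerable<KeyValuePair<string, decimal>> SalesByRegion(string region);
        }

        public class ChartController : Controller
        {
            private readonly ISalesData _data;

            // The concrete F# implementation would be supplied here, e.g. by a DI container.
            public ChartController(ISalesData data)
            {
                _data = data;
            }

            // Returns JSON that the jQuery code in the view turns into a graph.
            public ActionResult Sales(string region)
            {
                return Json(_data.SalesByRegion(region), JsonRequestBehavior.AllowGet);
            }
        }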

    Read the article

  • REST API at backend and MVC JavaScript framework at client side

    - by Prashere
    I am building an online social network. I have finished writing a RESTful API service using Django. It returns only JSON responses (no HTML is generated on the server side) so that the same JSON can also be used to build native smartphone apps; the API service is common to all clients. My question is: since there is no HTML response from the server side, can MV* JavaScript frameworks like Angular / Backbone / Ember take care of the complete front end, right down to generating the HTML pages with CSS?

    Read the article

  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to 2 thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives this object, it performs some business logic on the data and then turns around and sticks it in its own database, which mirrors the database on the client machines. The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. Now, we are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this is reducing the data footprint. However, the amount of data that the clients are using seems ridiculously large, given the amount of information that is actually being transmitted. It seems to me that Entity Framework itself may be to blame. I suspect that the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the 'new' inserts. Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?
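
    As an illustration of one common alternative (a sketch only, with hypothetical contract and member names, not the project's actual service contract): instead of passing EF entities across the wire, the clients could map the new rows into slim data-transfer objects and push those, so only the fields that actually need to travel go over the metered link.

        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Slim DTO carrying only the values the repository needs to perform an insert.
        [DataContract]
        public class ReadingDto
        {
            [DataMember(Order = 1)] public int ClientId { get; set; }
            [DataMember(Order = 2)] public DateTime TakenAt { get; set; }
            [DataMember(Order = 3)] public double Value { get; set; }
        }

        [ServiceContract]
        public interface IUploadService
        {
            // The service maps the DTOs back onto EF entities before saving them,
            // so no EF context or change-tracking data is ever serialized.
            [OperationContract]
            void PushReadings(List<ReadingDto> readings);
        }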

    Read the article

  • What's the best practice for SOA exception handling?

    - by sun1991
    Here's an interesting debate going on between me and my colleague about how to handle SOA exceptions.

    On one side, I support what Juval Lowy says in Programming WCF Services, 3rd Edition: "As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service—should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be."

    On the other side, here's what my colleague suggests: "I believe it's simply incorrect, as it does not align with best practices in building a service oriented architecture and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only systems exceptions, perhaps this idea holds, but systems exceptions are only part of the exception domain. User recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service oriented architecture is to map user recoverable situations to checked exceptions, then to marshall each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshall all runtime exceptions back to the client as a system exception, along with the stack trace so that it is easy to troubleshoot the root cause."

    I'd like to know what you think about this. Thank you.
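
    For reference, the colleague's position roughly corresponds to WCF's typed fault mechanism; a minimal sketch (the contract and type names here are hypothetical) of exposing a user-recoverable problem as a distinguishable fault might look like this.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Fault detail describing a problem the user can correct, e.g. a badly keyed value.
        [DataContract]
        public class ValidationFault
        {
            [DataMember] public string FieldName { get; set; }
            [DataMember] public string Message { get; set; }
        }

        [ServiceContract]
        public interface IAccountService
        {
            // Declaring the fault makes it part of the contract, so clients can catch
            // FaultException<ValidationFault> and react, while unexpected errors still
            // surface as the generic, indistinguishable FaultException.
            [OperationContract]
            [FaultContract(typeof(ValidationFault))]
            void Register(string userName);
        }

        // Service-side usage: translate the recoverable condition into the typed fault, e.g.
        //   throw new FaultException<ValidationFault>(
        //       new ValidationFault { FieldName = "userName", Message = "Name already taken." });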

    Read the article

  • Nameservers and migrating a VPS

    - by MeltingDog
    I am primarily a front-end developer who has been tasked with upgrading my company's VPS. As far as I understand, this is just the process of obtaining a new VPS with WHM/cPanel, migrating the existing accounts over to the new VPS, testing the sites out, and then pointing the DNS to the new nameserver records. That sounds pretty straightforward. What I am having trouble understanding is how to set up the new nameservers on the new VPS. How do I obtain/establish the new nameserver records for the new, blank VPS?

    Read the article

  • How to find out if my hosting's speed is good enough?

    - by Mert Nuhoglu
    There are lots of different online performance tests:

        - Google PageSpeed Insights
        - iWebTool Speed Test
        - AlertFox Page Load Time
        - WebPageTest

    There are also several desktop/client tools, such as:

        - the ping tool
        - YSlow
        - Firebug's Net console
        - Fiddler
        - HttpWatch

    I just want to decide whether my hosting provider's performance is good enough or whether I need to switch my hosting to another provider. So, which tool should I use to compare my hosting provider with other hosting providers?

    Read the article

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once, but what do you do if you have small pieces of data and you need to reduce them all the time? A simple example: choosing a service for a request.

    Imagine we have 10 services. Each provides the services host with sets of request headers and POST/GET arguments, and each service declares that it has 30 unique keys - 10 per set:

        service A:
            name
            id
            ...

    Now imagine we have a distributed services host: 200 machines with 10 services on each, and each service has 30 unique keys in its sets. But now, to find which service to map an incoming request to, we make our services post unique values that map to those sets. We can have up to (or more than) 10 000 such value sets on each machine for each service:

        service A, machine 1:   name = Sam, id = 13245, ...
        service A, machine 1:   name = Ben, id = 33232, ...
        ...
        service A, machine 100: name = Ron, id = 777888, ...

    So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000, and we get 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. Our task is to find a service for the request (or at least the machine it is running on). On all machines across the cluster, the same services have the same rules. We can first select which service the request belongs to via a rules filter (10 * 30), and we are left with 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem...

    My hope is to map the 30 * 10 000 value sets onto something like an artificial neural network (a perceptron) that outputs 1 if the 30 words (or hashes of the words) from the request are correct, and 0 if fewer match. I'd then send each such perceptron, for each service on each machine, to the gateway, so I would have a perceptron <-> machine map for each service. Can anyone tell me whether my perceptron idea is at least "sane"? Or do people normally do this some other way? Or are there better ANNs for such purposes?

    Read the article
