Search Results

Search found 28230 results on 1130 pages for 'embedded development'.

Page 764/1130

  • Pass variable to Google Custom Search Engine

    - by Matt
    Is it possible to pass a search variable into the Google Custom Search Engine that I have embedded on my website? I can get the search engine to work, but I can't pass it a term via POST (it's coming from a search button on other pages of the website). I tried to hack the code I found here: http://code.google.com/apis/ajax/playground/?exp=search#hello_world and this is what I have so far ($q is the term I am passing to it):
      <script type="text/javascript">
      google.load('search', '1', {language : 'en'});
      function OnLoad() {
        var customSearchControl = new google.search.CustomSearchControl('***my key****');
        customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
        customSearchControl.draw('cse');
        searchControl.execute("$q");
      }
      google.setOnLoadCallback(OnLoad);
      </script>
    Thanks
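
    A minimal sketch of one way this is often handled with the old Google AJAX Search API shown above: call execute() on the same control object that was drawn (the snippet above calls it on a different variable name), and read the term client-side from the query string instead of relying on POST. The q parameter name and the 'cse' element id are assumptions; if the term really must arrive via POST, the server-side page would have to echo it into the script, as the $q placeholder suggests.
      <script type="text/javascript">
      google.load('search', '1', {language: 'en'});

      function onLoad() {
        // Read the search term from the URL, e.g. results.html?q=widgets
        var match = window.location.search.match(/[?&]q=([^&]*)/);
        var query = match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : '';

        var control = new google.search.CustomSearchControl('***my key****');
        control.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
        control.draw('cse');

        // Execute on the control that was drawn, not a different variable name
        if (query) {
          control.execute(query);
        }
      }

      google.setOnLoadCallback(onLoad);
      </script>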

    Read the article

  • Must .aspx files have a page directive?

    - by Keith Bloom
    Around 90% of the pages for our websites have no .NET code embedded in them, yet they are published as .aspx files. I want these to render as fast as possible, so I'm removing as much as I can. Does the .NET page directive have an impact on performance? I am thinking about two factors: the page speed for each GET, and what happens when the file changes. The CMS re-creates each page daily and I'm wondering whether this triggers the ASP.NET compilation process.

    Read the article

  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long time SVN user and have been hearing a lot of brouhaha with regard to Mercurial and decentralised version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the SVN red book, in the section to do with merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to; our repository version wasn't recent enough), but is this SVN feature particularly different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this: branch from the development trunk to do your own thing; regularly merge changes from trunk into your branch to stay up to date; merge back when you're done, with the mergeinfo smoothing the process. Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken on the way, so each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is: does merging in Mercurial provide a significant advantage compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?
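
    For reference, the workflow described above looks roughly like this with Subversion's merge tracking (1.5+ for mergeinfo and --reintegrate, 1.6+ for the ^/ repository-relative shorthand); the repository URL, branch name and working-copy paths are placeholders:
      # Branch from trunk to do your own thing
      svn copy http://svn.example.com/repo/trunk \
               http://svn.example.com/repo/branches/my-feature -m "Create feature branch"
      svn checkout http://svn.example.com/repo/branches/my-feature my-feature

      # Regularly merge trunk changes into the branch (mergeinfo is recorded)
      cd my-feature
      svn merge ^/trunk
      svn commit -m "Sync branch with trunk"

      # When the feature is done, merge it back into a trunk working copy
      cd ../trunk-wc
      svn merge --reintegrate ^/branches/my-feature
      svn commit -m "Merge my-feature back into trunk"
    Mercurial's argument is that the equivalent hg merge needs no mergeinfo bookkeeping at all, because every changeset already knows its ancestry.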

    Read the article

  • Uploading a picture to a album using the graph api

    - by kielie
    Hi guys, I am trying to upload an image to an album, but it's not working. Here is the code I am using:
      $uid = $facebook->getUser();
      $args = array('message' => $uid);
      $file_path = "http://www.site.com/path/to/file.jpg";
      $album_id = '1234';
      $args['name'] = '@' . realpath($file_path);
      $data = $facebook->api('/'. $album_id . '/photos', 'post', $args);
      print_r($data);
    This code is in a function.php file that gets called when a user clicks on a button inside of a Flash file that is embedded on my canvas. So basically what I want it to do is: when the Flash takes a screen shot and passes the variable "image" to the function, it should upload $_GET['image'] to the album. How could I go about doing this? Thanx in advance!
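
    A rough sketch of how album photo uploads usually work with the old official Facebook PHP SDK, assuming the app has the relevant upload/publish permissions. The file has to be a local path attached with the '@' multipart syntax in the 'source' field (not 'name', and not a URL), and file uploads have to be switched on for the SDK object; the temp path and message text are assumptions:
      <?php
      require_once 'facebook.php';

      $facebook = new Facebook(array(
          'appId'  => 'YOUR_APP_ID',
          'secret' => 'YOUR_APP_SECRET',
      ));
      $facebook->setFileUploadSupport(true);   // required for '@file' style uploads

      $album_id  = '1234';
      $image_url = $_GET['image'];             // URL passed in by the Flash movie

      // Copy the remote screenshot to a local temp file first
      $tmp_file = sys_get_temp_dir() . '/fb_upload.jpg';
      file_put_contents($tmp_file, file_get_contents($image_url));

      $args = array(
          'message' => 'Screenshot upload',
          'source'  => '@' . realpath($tmp_file),  // '@' tells curl to attach the file
      );

      $data = $facebook->api('/' . $album_id . '/photos', 'post', $args);
      print_r($data);
      unlink($tmp_file);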

    Read the article

  • MSBuild file for deployment process

    - by Lee Englestone
    I could do with some pointers, code examples or references that may help me do the following in an MSBuild file, to help speed up the deployment process. This scenario involves getting a developer's 'local' version onto a 'development' server:
    1. Increment the developer's local web application's assembly version number.
    2. Publish the developer's local web application files somewhere.
    3. .rar the published files or folder into the format v[IncrementedAssemblyNumber].rar.
    4. Copy the .rar to somewhere.
    5. Back up (.rar) the existing live website folder (located elsewhere) in the format Pre_v[IncrementedAssemblyNumber].rar.
    6. Move the backed-up .rar to a /Backup folder.
    7. Overwrite the development web files with the published local web files.
    Should be simple for all those MSBuild gurus out there. Like I said, answers or good and applicable links would be much appreciated. Also, I'm thinking of getting one of the MSBuild books. From what I can tell there are 2, possibly 3 contenders. I am not using TFS. Can anyone recommend a book for beginning MSBuild? Ideally from people that have read more than one book on the subject. Cheers, -- Lee
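
    A hedged sketch of what the archive/backup/copy steps might look like as MSBuild targets (MSBuild 3.5+ for the ItemGroup inside a Target). The property values, the rar.exe path and the share names are placeholders, and the assembly-version bump is left to a task library such as MSBuild Community Tasks:
      <Project DefaultTargets="Deploy" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <PublishDir>C:\Publish\MyWebApp</PublishDir>
          <NewVersion>1.2.3.4</NewVersion>
          <DevServerDir>\\devserver\wwwroot\MyWebApp</DevServerDir>
          <BackupDir>\\devserver\Backup</BackupDir>
        </PropertyGroup>

        <Target Name="Package">
          <!-- Archive the published output as v[Version].rar -->
          <Exec Command="&quot;C:\Program Files\WinRAR\rar.exe&quot; a $(PublishDir)\..\v$(NewVersion).rar $(PublishDir)\*" />
        </Target>

        <Target Name="BackupExisting">
          <!-- Archive the current dev-server files before overwriting them -->
          <Exec Command="&quot;C:\Program Files\WinRAR\rar.exe&quot; a $(BackupDir)\Pre_v$(NewVersion).rar $(DevServerDir)\*" />
        </Target>

        <Target Name="Deploy" DependsOnTargets="Package;BackupExisting">
          <ItemGroup>
            <PublishedFiles Include="$(PublishDir)\**\*.*" />
          </ItemGroup>
          <!-- Overwrite the development server files with the published output -->
          <Copy SourceFiles="@(PublishedFiles)"
                DestinationFiles="@(PublishedFiles->'$(DevServerDir)\%(RecursiveDir)%(Filename)%(Extension)')" />
        </Target>
      </Project>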

    Read the article

  • How can I write a Template.pm filter to generate PNG output from LaTeX source code?

    - by Sinan Ünür
    I am looking for a way of generating PNG images of equations from LaTeX source code embedded in templates. For example, given:
      [% FILTER latex_display %]
      \begin{eqnarray*}
      \max && U(x,y) \\
      \mathrm{s.t.} && p_x x + p_y y \leq I \\
      && x \geq 0, y \geq 0
      \end{eqnarray*}
      [% END %]
    I would like to get the output:
      <div class="latex display"><img src="equation.png" width="x" height="y"></div>
    which should ultimately display as the rendered equation image. I am using ttree to generate documents offline. I know about Template::Plugin::Latex but that is geared towards producing actual documents out of LaTeX templates. Any suggestions?
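
    A minimal sketch of a static Template Toolkit filter that shells out to latex and dvipng; with ttree the filter would have to be exposed through a plugin or a small wrapper script rather than the inline Template->new() shown here. Tool paths, the DPI, the images/ output directory and the preamble are assumptions, and error handling is kept to a minimum:
      use strict;
      use warnings;
      use Digest::MD5 qw(md5_hex);
      use File::Temp  qw(tempdir);
      use Template;

      sub latex_display_filter {
          my $body = shift;
          my $name = md5_hex($body) . '.png';          # stable file name per equation
          my $out  = "images/$name";

          unless (-e $out) {
              my $dir = tempdir(CLEANUP => 1);
              open my $fh, '>', "$dir/eq.tex" or die $!;
              print $fh "\\documentclass{article}\n\\pagestyle{empty}\n",
                        "\\begin{document}\n$body\n\\end{document}\n";
              close $fh;
              system("latex -output-directory=$dir $dir/eq.tex") == 0 or die "latex failed";
              system("dvipng -T tight -D 120 -o $out $dir/eq.dvi") == 0 or die "dvipng failed";
          }
          return qq{<div class="latex display"><img src="$name"></div>};
      }

      my $tt = Template->new({
          FILTERS => { latex_display => \&latex_display_filter },
      });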

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:
    - I tend to be more of a purist, scorning inelegant approaches to solving problems with code
    - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
    - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions
    - I always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I tend to combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and refactoring)
    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By "the other way around" I mean: they fire up their editors and get the job done much faster, because they don't have to worry about how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counter-argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often hear is that clients don't care about how you write the code, but they do care about how long it takes you to deliver it. If it works, then all is good. Now, was my "purist" approach the wrong way to start programming? Should I just dump these purist concepts and code away, because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • Dynamics CRM 2013 rich text editor

    - by user2962918
    I am using Dynamics CRM 2013. How does one apply the rich text editor styling seen on the description field on the email form to other multiple line text fields? Viewing the source code it is obvious that the system is treating the rich text field very differently from the normal multiple line text fields in that instead of rendering a textarea it is rendering a table with an embedded iframe. In CRM 2011, I have used extensions that wrap up the TinyMCE editor but they were never very effective. It seems odd that I can't just check a box to do this to any text field in the settings when the behaviour is obviously built in. Thanks in advance. Richard.

    Read the article

  • SVN checkout or export for production environment?

    - by Eran Galperin
    In a project I am working on, we have an ongoing discussion amongst the dev team: should the production environment be deployed as a checkout from the SVN repository or as an export? The development environment is obviously a checkout, since it is constantly updated. For production, I'm personally for checking out the main trunk, since it makes future updates easier (just run svn update). However, some of the devs are against it, as svn creates files with the group/owner and permissions of the svn process (this is on a Linux OS, so those things matter), and having the .svn directories on production seems to them to be somewhat dirty. Also, if it is a checkout, how do you push individual features to production without including in-development code? Do you use tags, or branch out for each feature? Any alternatives? EDIT: I might not have been clear - one of the requirements is to be able to constantly push fixes to the production environment. We want to avoid a complete build (which takes much longer than a simple update) just for pushing critical fixes.

    Read the article

  • ptrace'ing of parent process

    - by osgx
    Hello. Can a child process use the ptrace system call to trace its parent? The OS is Linux 2.6. Thanks. UPD1: I want to trace process1 from "itself". That is impossible, so I fork and try to do ptrace(process1_pid, PTRACE_ATTACH) from the child process. But I can't; there is a strange error, as if the kernel prohibits children from tracing their parent processes. UPD2: such tracing can be prohibited by security policies. Which policies do this? Where is the checking code in the kernel? UPD3: on my embedded Linux I get no errors with PTRACE_PEEKDATA, but I do with PTRACE_GETREGS: child: getregs parent: -1, errno is 1, strerror is Operation not permitted (EPERM)
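
    A small sketch of the fork-and-attach-to-parent experiment described above. PTRACE_GETREGS and struct user_regs_struct are architecture-specific (this assumes x86), and whether the attach is allowed depends on the kernel's security modules, e.g. SELinux policy or, on newer kernels, Yama's /proc/sys/kernel/yama/ptrace_scope:
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <errno.h>
      #include <unistd.h>
      #include <sys/types.h>
      #include <sys/ptrace.h>
      #include <sys/user.h>
      #include <sys/wait.h>

      int main(void)
      {
          pid_t parent = getpid();

          if (fork() == 0) {                       /* child traces its parent */
              struct user_regs_struct regs;

              if (ptrace(PTRACE_ATTACH, parent, NULL, NULL) == -1) {
                  fprintf(stderr, "attach: %s\n", strerror(errno));
                  _exit(1);
              }
              waitpid(parent, NULL, 0);            /* wait for the parent to stop */

              if (ptrace(PTRACE_GETREGS, parent, NULL, &regs) == -1)
                  fprintf(stderr, "getregs: %s\n", strerror(errno));
              else
                  printf("child attached to parent %d\n", (int)parent);

              ptrace(PTRACE_DETACH, parent, NULL, NULL);
              _exit(0);
          }

          sleep(2);                                /* keep the parent alive */
          return 0;
      }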

    Read the article

  • Partial Classes - are they bad design?

    - by dferraro
    Hello, I'm wondering why the 'partial class' concept even exists in .NET. I'm working on an application and we are reading a (actually very good) book relevant to the development platform we are implementing at work. In the book the author provides a large code base/wrapper around the platform API and explains how he developed it as he teaches different topics about platform development. Anyway, long story short - he uses partial classes all over the place as a way to fake multiple inheritance in C# (IMO). Why he didn't just split the classes up into multiple ones and use composition is beyond me. He will have 3 'partial class' files to make up his base class, each with 3-500 lines of code... and does this several times in his API. Do you find this justifiable? If it were me, I'd have followed the S.R.P. and created multiple classes to handle different required behaviors, then created a base class that has instances of these classes as members (i.e. composition). Why did MS even put partial class into the framework? They removed the ability to expand/collapse all code at each scope level in C# (this was allowed in C++) because it was obviously just allowing bad habits - partial class is IMO the same thing. I guess my question is: can you explain to me when there would be a legitimate reason to ever use a partial class? I do not mean this to be a rant / war thread. I'm honestly looking to learn something here. Thanks
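
    For reference, a tiny sketch of the two approaches being contrasted: one class split across files with partial, versus composing smaller single-responsibility classes. All the names are invented for illustration:
      // Partial classes: one class definition spread across files
      // File: Order.Core.cs
      public partial class Order
      {
          public decimal Total { get; set; }
      }

      // File: Order.Persistence.cs
      public partial class Order
      {
          public void Save() { /* persistence code mixed into the same class */ }
      }

      // Composition: separate single-responsibility classes, wired together
      public class OrderRepository
      {
          public void Save(Order order) { /* persistence code lives here */ }
      }

      public class OrderService
      {
          private readonly OrderRepository repository = new OrderRepository();

          public void Place(Order order)
          {
              // behaviour delegated to a member instead of another "slice" of Order
              repository.Save(order);
          }
      }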

    Read the article

  • implementing feature structures: what data type to use?

    - by Dervin Thunk
    Hello. In simple terms, a feature structure is an unordered list of attribute-value pairs, e.g. [number:sg, person:3 | _ ], which can be embedded: [cat:np, agr:[number:sg, person:3 | _ ] | _ ]. It can also subindex values and share them, as in [number:[1], person:3 | _ ], where [1] is another feature structure (that is, it allows reentrancy). My question is: what data structure do people think this should be implemented with, for later access to the values, for performing unification between two feature structures, for "typing" them, etc.? There is a full book on this, but it's in Lisp, which simplifies list handling. So, my choices are: a hash of lists, a list of lists, or a trie. What do people think about this?
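
    One common representation, shown here as a Python sketch that maps directly onto a hash-of-hashes in other languages: nested dicts for the attribute-value pairs, with reentrancy falling out naturally because two attributes can reference the same sub-structure object. The unification here is deliberately naive (no variables, no occurs check); real implementations usually add union-find to preserve sharing:
      # A feature structure as a nested dict (hash of hashes)
      agr = {"number": "sg", "person": 3}          # [number:sg, person:3 | _]

      np = {"cat": "np", "agr": agr}               # embedding

      # Reentrancy: two paths share the *same* underlying object
      sentence = {"subj": {"agr": agr}, "pred": {"agr": agr}}
      assert sentence["subj"]["agr"] is sentence["pred"]["agr"]   # shared, not copied

      def unify(a, b):
          """Naive non-destructive unification of two feature structures."""
          if not isinstance(a, dict) or not isinstance(b, dict):
              if a == b:
                  return a
              raise ValueError("unification failure: %r vs %r" % (a, b))
          out = dict(a)
          for key, value in b.items():
              out[key] = unify(a[key], value) if key in a else value
          return out

      print(unify({"number": "sg"}, {"person": 3}))   # {'number': 'sg', 'person': 3}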

    Read the article

  • Simulating C-style for loops in python

    - by YGA
    (even the title of this is going to cause flames, I realize) Python made the deliberate design choice to have the for loop use explicit iterables, with the benefit of considerably simplified code in most cases. However, sometimes it is quite a pain to construct an iterable if your test case and update function are complicated, and so I find myself writing the following while loops:
      val = START_VAL
      while <awkward/complicated test case>:
          # do stuff ...
          val = <awkward/complicated update>
    The problem with this is that the update is at the bottom of the while block, meaning that if I want to have a continue embedded somewhere in it I have to: use duplicate code for the complicated/awkward update, AND run the risk of forgetting it and having my code loop infinitely. I could go the route of hand-rolling a complicated iterator:
      def complicated_iterator(val):
          while <awkward/complicated test case>:
              yield val
              val = <awkward/complicated update>

      for val in complicated_iterator(start_val):
          if <random check>:
              continue  # no issues here
          # do stuff
    This strikes me as waaaaay too verbose and complicated. Do folks on Stack Overflow have a simpler suggestion?
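
    A short sketch of the generator approach written once as a reusable helper, so it does not have to be hand-rolled per loop; the helper name and the example test/update functions are illustrative:
      def c_for(first, test, update):
          """Generator mimicking C's  for (val = first; test(val); val = update(val))."""
          val = first
          while test(val):
              yield val
              val = update(val)

      # The update runs exactly once per iteration, so 'continue' is safe:
      for val in c_for(1, lambda v: v < 100, lambda v: v * 3 + 1):
          if val % 2 == 0:
              continue          # no risk of skipping the update
          print(val)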

    Read the article

  • SQL Server 2008 automated database drop, create and fill

    - by lox
    For the database in my project I have a drop/create script for the database, a script for creating tables and SPs, and an Access 2003 .mdb file with some exported values. To set up the database from scratch I can use SQL Server Management Studio to first run one script, then the other, and lastly run the rather tedious import task manually. But I would like to make this as automated as possible. Hopefully something like putting the three files in a folder along with a fourth script to execute, looking something like:
      run script "dropcreate.sql"
      run script "createtables.sql"
      import "values.mdb"
    How is this done? I hope to avoid using SSIS and the like. The tricky thing is of course the import of the data, where I can't seem to find a simple way. It is also important that the files are left as they are and not embedded into anything.
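
    A rough sketch of a wrapper batch file that drives the two SQL scripts with sqlcmd, plus an OPENROWSET-based import for the Access file. The server name, database name, table and column names are placeholders, and the import assumes 'Ad Hoc Distributed Queries' is enabled and a Jet/ACE OLE DB provider that matches the SQL Server bitness is installed:
      rem setupdb.bat - run the three steps in order
      sqlcmd -S .\SQLEXPRESS -E -i dropcreate.sql
      sqlcmd -S .\SQLEXPRESS -E -d MyDatabase -i createtables.sql
      sqlcmd -S .\SQLEXPRESS -E -d MyDatabase -i importvalues.sql
    with importvalues.sql pulling rows straight out of the .mdb file:
      -- importvalues.sql: copy exported values from the Access file
      INSERT INTO dbo.MyTable (Col1, Col2)
      SELECT Col1, Col2
      FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'C:\setup\values.mdb'; 'admin'; '',
                      MyTable);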

    Read the article

  • samba windows gvim & "READ ERRORS"

    - by l.thee.a
    I have a samba server on my embedded box. I was planning to use it to edit files easily. However when I open files on my windows machine, I get read errors. I have tried textpad, gvim and kate(andLinux). On gvim I first get "write error in swap file" and it is replaced by "[READ ERRORS]". If I access the same file through pyNeighborhood (ubuntu) I can read/write/delete etc. Samba security is set to share and guest user is root. Also if I move the file to my ubuntu samba server, I get no problems. Any ideas? PS: I am using smbd 3.0.25b.

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that needs me to convert my existing live production (I've duplicated the schema on my local development box, don't worry :)) table column types from enums to strings. Background: basically, a previous developer left my codebase in absolute shit, migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing a rake db:test:prepare, as in:
      Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `member_id` int(11) DEFAULT 0 NOT NULL, `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL, `hobbies` text, `sports` text, `AStar_activities` text, `how_know_IRC` varchar(100), `IRC_referral` varchar(200), `IRC_others` varchar(100), `IRC_rdrive` varchar(30)) ENGINE=InnoDB
    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not very sure how to write it such that it would loop through my database tables and replace all ENUM column types with VARCHAR.
    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832
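
    A sketch of what such a migration might look like on Rails 2.3 with MySQL, walking the schema and converting every ENUM column to a string while preserving the column's default and NULL-ness. The 255 limit is an assumption, and this should be rehearsed against a copy of the production schema first:
      class ConvertEnumColumnsToStrings < ActiveRecord::Migration
        def self.up
          connection = ActiveRecord::Base.connection
          connection.tables.each do |table|
            connection.columns(table).each do |column|
              next unless column.sql_type =~ /^enum/i
              # change_column rebuilds the column as VARCHAR, keeping default/null
              change_column table, column.name, :string,
                            :limit => 255,
                            :default => column.default,
                            :null => column.null
            end
          end
        end

        def self.down
          raise ActiveRecord::IrreversibleMigration
        end
      end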

    Read the article

  • Persistence unit is not persistent

    - by etam
    I need a persistence unit that creates an embedded database which stays persistent after closing the EntityManager. This is my PU:
      <persistence-unit name="hello-jpa" transaction-type="RESOURCE_LOCAL">
        <class>hello.jpa.User</class>
        <properties>
          <property name="hibernate.show_sql" value="true"/>
          <property name="hibernate.format_sql" value="true"/>
          <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
          <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/>
          <property name="hibernate.connection.username" value="sa"/>
          <property name="hibernate.connection.password" value=""/>
          <property name="hibernate.connection.url" value="jdbc:hsqldb:target/hsql.db"/>
          <property name="hibernate.hbm2ddl.auto" value="update"/>
        </properties>
      </persistence-unit>
    But it deletes the data after closing the application.
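
    One thing that often bites people with in-process HSQLDB is that the file database is never cleanly shut down, so data still sitting in the log is lost. A common tweak, offered here as a guess based on the configuration above rather than a confirmed diagnosis, is to use the explicit file: URL form and ask HSQLDB to shut down when the last connection closes, while keeping hbm2ddl.auto at update rather than create-drop:
      <property name="hibernate.connection.url"
                value="jdbc:hsqldb:file:target/hsql.db;shutdown=true"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>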

    Read the article

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign? Components: SQL Server 2008 SQL Server Reporting Services (SSRS) 2008 SQL Server Integration Services (SSIS) 2008 Platforms: Production Staging/QA Development/Integration We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role set up. I plan to set up access for Developers to Prod and Staging/QA DBs as Read-Only. However, I still want developers to retain the ability to run Profiling. We need Deployment accounts with higher privilege levels. We are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the Development Server, Developers need broad privileges. However, I am not sure that just making them all admins is really the best choice. It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?
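
    Not a cookbook, but a hedged T-SQL sketch of one common split: read-only access on production/staging plus the ability to run Profiler, and broad rights on the development database. The Windows group names are placeholders, and note that ALTER TRACE is a server-level permission, so it is granted to the login in master, not inside a database:
      -- Server level: allow developers to run Profiler / server-side traces
      USE master;
      CREATE LOGIN [DOMAIN\Developers] FROM WINDOWS;
      GRANT ALTER TRACE TO [DOMAIN\Developers];

      -- Production / staging databases: read-only access
      USE ProductionDb;
      CREATE USER [DOMAIN\Developers] FOR LOGIN [DOMAIN\Developers];
      EXEC sp_addrolemember N'db_datareader', N'DOMAIN\Developers';

      -- Development database: broad rights without server-wide sysadmin
      USE DevelopmentDb;
      CREATE USER [DOMAIN\Developers] FOR LOGIN [DOMAIN\Developers];
      EXEC sp_addrolemember N'db_owner', N'DOMAIN\Developers';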

    Read the article

  • Performance of fopen vs stat

    - by Alex Marshall
    Hello, I'm writing several C programs for an embedded system where every bit of performance we can squeeze out will matter. Part of that is accessing log files. When determining if a file exists, is there any performance difference between using open / fopen, and stat ? I've been using stat on the assumption that it only has to do a quick check against the file system, whereas fopen would have to actually gain access to a file and manipulate internal data structures before returning. Is there any merit to this ?
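
    For illustration, a minimal stat()-based existence check of the kind described; whether it actually beats fopen/open on a given embedded target is something to measure, since both end up doing a path lookup, but stat avoids allocating a FILE structure and a descriptor (access(2) is another common lightweight option):
      #include <stdio.h>
      #include <sys/stat.h>

      /* Return 1 if the path exists, 0 otherwise -- no file handle is created */
      static int file_exists(const char *path)
      {
          struct stat st;
          return stat(path, &st) == 0;
      }

      int main(void)
      {
          printf("log present: %d\n", file_exists("/var/log/app.log"));
          return 0;
      }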

    Read the article

  • Why does IE prompt a security warning when viewing an XML file?

    - by Tav
    Opening an XML file in Internet Explorer gives a security warning. IE has a nice collapsible tree view for viewing XML, but it's disabled by default and you get this scary error message about a potential security hole. http://www.leonmeijer.nl/archive/2008/04/27/106.aspx But why? How can simply viewing an XML file (not running any embedded macros in it or anything) possibly be a security hole? Sure, I get that running XSLT could potentially do some bad stuff, but we're not talking about executing anything. We're talking about viewing. Why can't IE simply display the XML file as text (plus the collapsible tree viewer)? So why did they label this as a security hole? Can someone describe how simply viewing an XML document could be used as an attack?

    Read the article

  • Java Non-Blocking HTTP Server

    - by Marcus
    I have written an application using embedded Jetty that makes network calls to other services. I presume that the serving threads are idle whilst waiting for the network calls to complete. Is there any way to have a worker thread that switches between requests to perform work that can be done at the current time and then when the network calls return also handle that? A request would be returned when all work has been completed for it. I know this is a common paradigm, and I have used it for non-blocking TCP networking, but I'm unsure as to how to achieve this on a Java HTTP server whilst also waiting on external results. Any links or explanations are appreciated. Thanks
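
    A rough sketch of the suspend/complete pattern using Jetty's Continuation API (org.eclipse.jetty.continuation), which is the usual way to free the serving thread while a downstream call is in flight; Servlet 3.0's AsyncContext offers the same shape on newer stacks. The background thread below is only a stand-in for a genuinely non-blocking client whose callback fires when the remote response arrives:
      import java.io.IOException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;
      import org.eclipse.jetty.continuation.Continuation;
      import org.eclipse.jetty.continuation.ContinuationSupport;

      public class ProxyServlet extends HttpServlet {

          @Override
          protected void doGet(HttpServletRequest request, final HttpServletResponse response)
                  throws IOException {
              final Continuation continuation = ContinuationSupport.getContinuation(request);
              if (continuation.isInitial()) {
                  continuation.suspend(response);          // free the serving thread

                  // Stand-in for a non-blocking network call; in reality this would be
                  // an async HTTP/TCP client whose callback runs when data arrives.
                  new Thread(new Runnable() {
                      public void run() {
                          String body = callRemoteService();   // slow network call
                          try {
                              response.getWriter().write(body);
                          } catch (IOException ignored) {
                          }
                          continuation.complete();             // finish the request
                      }
                  }).start();
              }
          }

          private String callRemoteService() {
              try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
              return "result from remote service";
          }
      }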

    Read the article

  • Embedding mercurial revision information in Visual Studio c# projects automatically

    - by Mark Booth
    Original Problem

    In building our projects, I want the Mercurial id of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application being run by customers 8 timezones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our systems implements a way of getting at the associated revision information. I also find it very useful to be able to see if an application has been compiled with clean (un-modified) changesets from the repository. 'hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows us to easily see if people are running a clean or a modified version of the code. My current solution is detailed below, and fulfills the basic requirements, but there are a number of problems with it.

    Current Solution

    At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands:
      cd $(ProjectDir)
      HgID
    I also add an HgID.bat file to the project directory:
      @echo off
      type HgId.pre > HgId.cs
      For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cs set /p = @"%%a"
      echo ; >> HgId.cs
      echo } >> HgId.cs
      echo } >> HgId.cs
    along with an HgId.pre file, which is defined as:
      namespace My.Namespace {
        /// <summary> Auto generated Mercurial ID class. </summary>
        internal class HgID {
          /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary>
          public const string Version =
    When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with the new 'hg id' string in 'Version'.

    Problems with the current solution

    The main problem is that since HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file have actually changed, as opposed to having been re-created with exactly the same contents. The second problem with this method is its dependence on specific behaviour of the Windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP, and finally I managed to make it work with both. Whether it will work with Windows 7 is anyone's guess, and as time goes on I see it more likely that contractors will expect to be able to build our apps on their Windows 7 boxen. Finally, I have an aesthetic problem with this solution: batch files and bodged-together template files feel like the wrong way to do this.

    My actual questions

    How would you solve/how are you solving the problem I'm trying to solve? What better options are out there than what I'm currently doing?

    Rejected solutions to these problems

    Before I implemented the current solution, I looked at Mercurial's Keyword extension, since it seemed like the obvious solution. However, the more I looked at it and read people's opinions, the more I came to the conclusion that it wasn't the right thing to do. I also remember the problems that keyword substitution has caused me in projects at previous companies (just the thought of ever having to use Source Safe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example Cygwin). Any other options people can think of would be appreciated.

    Update: partial solution

    Having played around with it for a while, I've managed to get the HgId.bat file to only overwrite the HgId.cs file if it changes:
      @echo off
      type HgId.pre > HgId.cst
      For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
      echo ; >> HgId.cst
      echo } >> HgId.cst
      echo } >> HgId.cst
      fc HgId.cs HgId.cst >NUL
      if %errorlevel%==0 goto :ok
      copy HgId.cst HgId.cs
      :ok
      del HgId.cst

    Problems with this solution

    Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools|Options|Projects and Solutions|Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went. Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and thus there is now more to go wrong. If you can think of any better solutions, I would love to hear about them.

    Read the article

  • Super Cam iphone app how do they make it possible?

    - by Silent
    There is an iPhone app called SuperCam that you can get through the App Store for free. The app lets you connect to a webcam or DV cam that is reachable over the internet: you set up the IP address and enter the details in the app, and it connects to your online camera. The thing is, they have the video stream and it looks like they embedded the video in a UIView or web view; at the bottom they have buttons to choose between all the cameras you have set up. So this is different from other video streaming apps, because it does not play the video in full-screen mode (the MPMediaPlayer API). Are there any tutorials about this, or some way to reverse engineer how it's done?

    Read the article

  • Where to post code for open source usage?

    - by Douglas
    I've been working for a few weeks now with the Google Maps API v3, and have done a good bit of development for the map I've been creating. Some of the things I've done have had to be done to add usability where there previously was not any, at least not that I could find online. Essentially, I made a list of what had to be done, searched all over the web for the ways to do what I needed, and found that some were not (at the time) possible, in the "grab an example off the web" sense. Thus, in working on this map, I have created a number of very useful tools, which I would like to share with the development community. Is there anywhere I could use as a hub, apart from my portfolio ( http://dougglover.com ), to allow people to view and recycle my work? I know how hard it can be to need to do something and be unable to find the solution elsewhere, and I don't think that if something has been done before, it should necessarily need to be written again and again. Hence open source code, right? Firstly, I was considering coming on here and asking a question, and then just answering it. The problem there is I assume that would just look like a big reputation grab. If not, please let me know and I'll go ahead and do that so people here can see it. Other suggestions appreciated. Some stuff I've made:
    - A (new and improved) LatLng generator: works quicker, generating a LatLng based on the position of a draggable marker; allows searching for an address to place the marker on/near the desired location (much better than having to scroll to your location all the way from Siberia); and since it's a draggable marker, double-clicking zooms in instead of creating a new LatLng marker like the one I was originally using.
    - The ability to create entirely custom "Smart Paths": plot LatLng points on the map which connect to each other just like they do using the actual Google Maps; using Dijkstra's algorithm with JavaScript, the routing is intelligent and always gives the shortest possible route using the points provided; and a simple, easy-to-read multi-dimensional array system allows for easily adding new points to the grid.
    Any suggestions, etc. appreciated.

    Read the article

  • How can I match at the beginning of any line, including the first, with a Perl regex?

    - by JoelFan
    According to the Perl documentation on regexes: By default, the "^" character is guaranteed to match only the beginning of the string ... Embedded newlines will not be matched by "^" ... You may, however, wish to treat a string as a multi-line buffer, such that the "^" will match after any newline within the string ... you can do this by using the /m modifier on the pattern match operator. The "after any newline" part means that it will only match at the beginning of the 2nd and subsequent lines. What if I want to match at the beginning of any line (1st, 2nd, etc.)? EDIT: OK, it seems that the file has BOM information (3 chars) at the beginning and that's what's messing me up. Any way to get ^ to match anyway? EDIT: So in the end it works (as long as there's no BOM), but now it seems that the Perl documentation is wrong, since it says "after any newline".
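
    A small illustration of both points: with /m, ^ matches at the very beginning of the string as well as after every newline, and stripping a leading BOM (either a decoded U+FEFF character or the raw UTF-8 bytes) lets it match the first line too:
      use strict;
      use warnings;

      my $text = "\x{FEFF}first line\nsecond line\nthird line\n";

      # Remove a leading BOM: U+FEFF if the text is already decoded,
      # or the raw bytes EF BB BF if it is still UTF-8 encoded.
      $text =~ s/\A(?:\x{FEFF}|\xEF\xBB\xBF)//;

      # With /m, ^ matches at the start of the string and after every newline.
      my @starts = $text =~ /^(\w+)/mg;
      print "@starts\n";    # prints: first second third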

    Read the article
