Search Results

Search found 74749 results on 2990 pages for 'file types'.


  • YAHOO and BING support for Index, Image and Mobile sitemaps

    - by kishore
    I know Google Webmaster Tools supports submitting Image, Mobile, Video, and other types of sitemaps. YAHOO also mentions mobile sitemaps here. But does it support Image and Video sitemaps? I could not find whether BING supports any of these types other than plain XML sitemaps. Can someone please point me to any documentation on submitting Index, Image, and Mobile sitemaps? Also, do YAHOO and Bing support sitemap index files?
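
    For reference, the "index sitemap" being asked about is a sitemap index file from the sitemaps.org protocol: an XML file that simply lists the locations of the individual sitemaps. A minimal sketch (the URLs are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>http://www.example.com/sitemap-pages.xml</loc>
        <lastmod>2010-05-01</lastmod>
      </sitemap>
      <sitemap>
        <loc>http://www.example.com/sitemap-images.xml</loc>
      </sitemap>
    </sitemapindex>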

    Read the article

  • File corrupted by some tools (probably virus or antivirus) - does the pattern indicate any known corruption?

    - by StackTrace
    As part of our software we install Postgres (on Windows). At one customer site, a set of files got corrupted. All of the files were part of the timezone information (postgres/share/timezone); they are binary files of some sort. After the corruption, they all start with the following pattern (od -tac output):

    $ od -tac GMT
    0000000 can esc etx sub nak dle em | nl em so | o r l _
            030 033 003 032 025 020 031 | \n 031 016 | o r l _
    0000020 \ \ \ \ \ \ \ del 3 fs ] del del del del del
            \ \ \ \ \ \ \ 377 3 034 ] 377 377 377 377 377
    0000040 > ack r v s ack p soh q h r s q w h q
            276 206 362 366 363 206 360 201 361 350 362 363 361 367 350 361
    0000060 t r ack h eot s } v h | etx p eot ack nul }
            364 362 206 350 204 363 375 366 350 374 203 360 204 206 200 375
    0000100 | q t s t 8 E E E E E E E E E E
            374 361 364 363 364 270 305 305 305 305 305 305 305 305 305 305
    0000120 E E E E E E E E E E E E E E E E
            305 305 305 305 305 305 305 305 305 305 305 305 305 305 305 305
    *
    0000240 m ; z dc3 7 sub c can em a u 5 can d 2 B
            355 ; z 023 267 232 343 230 031 a u 5 230 d 262 302
    0000260 X nul y J o S - 9 ] stx soh L can 1 ! j
            330 \0 y 312 o S 255 9 335 202 001 314 030 261 241 j
    0000300 dle g o etb n ff em ] 9 F ' dc4 } , em $
            020 g 357 227 n \f 231 ] 271 F 247 024 375 254 231 244
    0000320 Q si ff L bs 2 # stx i 5 r % | | c del
            Q 017 214 314 210 2 # 002 351 5 362 245 374 374 343 177
    0000340 m C esc H em enq ~ X o V p / l dc3 N sp
            m C 033 H 031 205 376 X o 326 360 257 l 023 N
    0000360 } ) enq ( syn ! 3 s $ E z dc3 A dc3 ff P

    Read the article

  • Linux program to convert audio file of fax transmission to image?

    - by bdk
    I have a number of uncompressed audio files, recorded off an analog (POTS) telephone line, of fax transmissions. Is there a Linux utility or library I could use to convert these files into images of the faxes they contain? I'm not looking to send/receive a fax via a modem, but just to "replay" the communication tones and parse out the fax message. I'm guessing this may not be possible due to duplex issues and not knowing which end of the conversation is sending what, but I thought I'd ask to see if anyone knew of something.

    Read the article

  • What source code organization approach helps improve modularity and API/Implementation separation?

    - by Berin Loritsch
    Few languages are as restrictive as Java with file naming standards and project structure. In that language, the file name must match the public class declared in the file, and the file must live in a directory structure matching the class package. I have mixed feelings about that approach. While I never have to guess where a file lives, there are still a lot of empty directories and artificial constraints.

    There are several languages that define everything about a class in one file, at least by convention: C#, Python (I think), Ruby, Erlang, etc. The commonality in most of these languages is that they are object oriented, although that statement can probably be rebutted (there is one non-OO language in the list already).

    Finally, there are quite a few languages, mostly in the C family, that have separate header and implementation files. For C I think this makes sense, because it is one of the few ways to separate the API interface from implementations. With C, that feature seems to be used to promote modularity. Yet with C++, the way header and implementation files are split feels rather forced. You don't get the same clean API separation that you do with C, and you are forced to include some private details in the header you would rather keep only in the implementation.

    There are quite a few languages that have an interface concept: Java, C#, Go, etc. Some languages use what feels like a hack to provide the same concept, like C++ using pure virtual abstract classes. Still others don't really have an interface concept and rely on "duck" typing - for example, Ruby. Ruby has modules, but those are more along the lines of mixing behaviors into a class than of defining how to interact with a class. In OO terms, interfaces are a powerful way to provide separation between an API client and an API implementation.

    So, to hurry up and ask the question, from a personal experience point of view: Does separation of header and implementation help you write more modular code, or does it get in the way? (It helps to specify the language you are referring to.) Does the strict file-name-to-class-name scheme of Java help maintainability, or is it unnecessary structure for structure's sake? What would you propose to promote good API/implementation separation and project maintenance, and how would you prefer to do it?
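
    For concreteness, the C-style separation discussed above looks something like this (a minimal sketch; all names are hypothetical): the header exposes only an opaque type and the functions that operate on it, while the layout and logic stay in the .c file, so clients never depend on private details.

    /* counter.h -- the public API; clients include only this */
    #ifndef COUNTER_H
    #define COUNTER_H

    typedef struct Counter Counter;   /* opaque: the layout is hidden */

    Counter *counter_new(void);
    void     counter_increment(Counter *c);
    int      counter_value(const Counter *c);
    void     counter_free(Counter *c);

    #endif

    /* counter.c -- the implementation; the struct can change without
       touching client code, since clients only ever hold a pointer */
    #include <stdlib.h>
    #include "counter.h"

    struct Counter { int value; };    /* private detail, invisible to clients */

    Counter *counter_new(void)          { return calloc(1, sizeof(Counter)); }
    void counter_increment(Counter *c)  { c->value++; }
    int counter_value(const Counter *c) { return c->value; }
    void counter_free(Counter *c)       { free(c); }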

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:

    - Allows my users to share large files with:
      - the general public
      - securely, with 1 or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation
    - Is user friendly enough that my users can use it without having to bug me as the admin
    - Can be installed on our own server (we don't want shared data sitting on anyone else's server)
    - Is a web based solution; using some kind of secure comms channel would be good too, e.g. SSH
    - Handles files over 1 GB

    I found the question below, but WebDAV does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen

    EDIT Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area might have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • What's the best solution for file sharing in my case? DAS or NAS?

    - by jakub
    I want to have a small, cheap, and energy-efficient server in my network which will be fully customizable (GNU/Linux, OpenBSD). What's more, I want big, redundant storage in my network, with access to it via the server. I already have a small terminal without a hard drive (no SATA/PATA, one drive on USB) which works fine. I don't want to buy a big server or use a regular computer for this; it's not cheap. I thought about a small case (ITX?) and a cheap computer with SATA ports, but I cannot find anything interesting :( I also thought about having a NAS and a server independently on the network, booting the server from the NAS, but I'm not sure which technologies would be good for that, and I don't know what the performance would be like. Direct connection to the NAS through the network from a workstation is another pro for that approach. What do you think about DAS? Would it be good for this?

    Read the article

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message:

    Error: 9002, Severity: 17, State: 6
    The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.

    We have other tables in our database which are similar in size (i.e. around 700,000 records). On those tables, we can run any queries we'd like and we never receive a message about tempdb being full. To resolve this, we've backed up our database, shrunk the actual database, and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we still might be receiving this message?
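
    Two hedged first steps (the index name below is hypothetical): the ORDER BY over 750,000 rows forces a sort that spills into tempdb, so check how full each log actually is, and consider an index that lets SQL Server read the rows already ordered and skip the sort entirely:

    -- How full is each database's transaction log right now?
    DBCC SQLPERF(LOGSPACE);

    -- Let TOP 100 ... ORDER BY Date_Ordered DESC be answered without a tempdb sort
    CREATE INDEX IX_Orders_Date_Ordered ON Orders (Date_Ordered);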

    Read the article

  • How To: Using spatial data with Entity Framework and Connector/Net

    - by GABMARTINEZ
    One of the new features introduced in Entity Framework 5.0 is the incorporation of some new types of data within an Entity Data Model: the spatial data types. These types allow us to perform operations on coordinate values in an easier way. There's no need to add stored routines or functions for every operation among these geometry types; the user now has the alternative to put this logic in the application or keep it in the database. In the new 6.7.4 version this feature is also incorporated into the Connector/Net library, so our users can start exploring it and provide us feedback or comments about this new functionality. Through this tutorial on how to create a Code First entity model with a geometry column, we'll show an example of using geometry types and some common operations on them inside an application.

    Requirements:
    - Connector/Net 6.7.4
    - Entity Framework 5.0
    - .NET Framework 4.5
    - Basic understanding of Entity Framework and the C# language
    - An installed and running instance of MySQL Server 5.5.x or 5.6.10
    - Visual Studio 2012

    Step One: Create a new Console Application

    Inside Visual Studio select the File->New Project menu option and select the Console Application template. Also make sure the .NET 4.5 version is selected so the new features for EF 5.0 will work with the application.

    Step Two: Add the Entity Framework Package

    For adding the Entity Framework package there is more than one option: the Package Manager Console or the Manage NuGet Packages dialog. If you want to open the Package Manager Console, go to the Tools menu -> Library Package Manager -> Package Manager Console. On the Package Manager Console type:

    Install-Package EntityFramework

    This will add a reference to the latest released non-alpha version of Entity Framework to the project.

    Step Three: Adding the Entity class and DbContext

    We'll add a simple class that represents a table entity to save some places and their locations, using a DbGeometry column that will be mapped to a Geometry type in MySQL. After that, some operations can be performed using this data.

    public class MyPlace
    {
      [Key]
      public int Id { get; set; }
      public string name { get; set; }
      public DbGeometry location { get; set; }
    }

    public class JourneyDb : DbContext
    {
      public DbSet<MyPlace> MyPlaces { get; set; }
    }

    Also make sure to add the connection string to the App.config file as in the example:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
        <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
      </configSections>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
      </startup>
      <connectionStrings>
        <add name="JourneyDb" connectionString="server=localhost;userid=root;pwd=;database=journeydb" providerName="MySql.Data.MySqlClient"/>
      </connectionStrings>
      <entityFramework>
      </entityFramework>
    </configuration>

    Note also that the <entityFramework> section is empty.

    Step Four: Adding some new records

    In the Program.cs file, add the following code to the Main method so the database gets created and some new data is added to the new table. This code adds records containing some predetermined locations.
    After the records are added, a distance function is used to compute how far each location is from the Queens Village Station in New York.

    static void Main(string[] args)
    {
      using (JourneyDb cxt = new JourneyDb())
      {
        cxt.Database.Delete();
        cxt.Database.Create();

        cxt.MyPlaces.Add(new MyPlace()
        {
          name = "JFK INTERNATIONAL AIRPORT OF NEW YORK",
          location = DbGeometry.FromText("POINT(40.644047 -73.782291)"),
        });

        cxt.MyPlaces.Add(new MyPlace()
        {
          name = "ALLEY POND PARK",
          location = DbGeometry.FromText("POINT(40.745696 -73.742638)"),
        });

        cxt.MyPlaces.Add(new MyPlace()
        {
          name = "CUNNINGHAM PARK",
          location = DbGeometry.FromText("POINT(40.735031 -73.768387)"),
        });

        cxt.MyPlaces.Add(new MyPlace()
        {
          name = "QUEENS VILLAGE STATION",
          location = DbGeometry.FromText("POINT(40.717957 -73.736501)"),
        });

        cxt.SaveChanges();

        var points = (from p in cxt.MyPlaces
                      select new { p.name, p.location });

        foreach (var item in points)
        {
          Console.WriteLine("Location " + item.name + " has a distance in Km from Queens Village Station " +
            DbGeometry.FromText("POINT(40.717957 -73.736501)").Distance(item.location) * 100);
        }
        Console.ReadKey();
      }
    }

    Output:

    Location JFK INTERNATIONAL AIRPORT OF NEW YORK has a distance from Queens Village Station 8.69448802402959 Km.
    Location ALLEY POND PARK has a distance from Queens Village Station 2.84097675104912 Km.
    Location CUNNINGHAM PARK has a distance from Queens Village Station 3.61695793727275 Km.
    Location QUEENS VILLAGE STATION has a distance from Queens Village Station 0 Km.

    Conclusion:

    Adding spatial data to a table is easier than before with Entity Framework 5.0. This new Entity Framework feature that handles spatial data columns within the data layer has a lot of integrated functions and methods to ease these types of tasks.

    Notes:

    This version of Connector/Net has been released as GA, so it is stable enough to be used in a production environment. Please send us your comments or questions using this blog or at the Forums, where we keep answering any questions you have about Connector/Net and MySQL Server. A copy of this sample project can be downloaded here. This application does not include any libraries, so you will have to add them before running it. Happy MySQL/.NET coding.

    Read the article

  • Dovecot Virtual Users Not Authenticating

    - by blankabout
    We have a standard Postfix/Dovecot installation working perfectly with real users but cannot work out how to add virtual users; all virtual user login attempts fail with authentication errors. Following are snippets from the configuration files:

    /etc/postfix/main.cf:
    virtual_mailbox_domains = virtualexample.com
    virtual_mailbox_base = /var/spool/vhosts
    virtual_mailbox_recipients = hash:/etc/postfix/virtual_mailbox_recipients

    /etc/dovecot/dovecot.conf:
    !include conf.d/*.conf

    /etc/dovecot/conf.d/10-auth.conf:
    auth_mechanisms = cram-md5 digest-md5 plain
    passdb {
      driver = passwd-file
      # Path for passwd-file. Also set the default password scheme.
      args = scheme=cram-md5 /etc/cram-md5.pwd
    }

    /etc/cram-md5.pwd:
    [email protected]{MD5}$1$uIMvzy92$9Xt67B/qw4u6txkkxzne80

    This is a snippet from the log when a login attempt is made:

    auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
    auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so
    auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
    auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libmech_gssapi.so
    auth: Debug: passwd-file /etc/cram-md5.pwd: Read 1 users
    auth: Debug: auth client connected (pid=21990)
    auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51774
    auth: Debug: client out: CONT#0111#011PDI1Njc0NjQ1NzQ3MTY0NTkuMTM0MTIxNzkwN0BncDM+
    auth: Debug: client in: CONT
    auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
    auth: Debug: client out: OK#0111#[email protected]
    auth: Debug: master in: REQUEST#0111630404609#01121990#0111#011b66b5f46b520a08e1d19d3d249be7073
    auth: Debug: passwd([email protected],2.2.2.2): lookup
    auth: passwd([email protected],2.2.2.2): unknown user
    auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
    auth: Debug: master out: NOTFOUND#0111630404609
    imap: Error: Authenticated user not found from userdb, auth lookup id=1630404609 (client-pid=21990 client-id=1)
    imap-login: Internal login failure (pid=21990 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=21993
    auth: Debug: auth client connected (pid=22010)
    auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51775
    auth: Debug: client out: CONT#0111#011PDcxMDkwNDY1NTQzODUzMDkuMTM0MTIxNzkyOEBncDM+
    auth: Debug: client in: CONT
    auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
    auth: Debug: client out: OK#0111#[email protected]
    auth: Debug: master in: REQUEST#011343539713#01122010#0111#011e47b1345784e2845d59e794afa9a6bbe
    auth: Debug: passwd([email protected],2.2.2.2): lookup
    auth: passwd([email protected],2.2.2.2): unknown user
    auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
    auth: Debug: master out: NOTFOUND#011343539713
    imap: Error: Authenticated user not found from userdb, auth lookup id=343539713 (client-pid=22010 client-id=1)
    imap-login: Internal login failure (pid=22010 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=22011

    It would appear that the user lookup is not working, even tho' the log suggests that Dovecot is using the /etc/cram-md5.pwd file and the user is configured in that same file.
    There are of course dozens of examples of using virtual users with Dovecot, but all the ones we have found either refer to Dovecot 1.x (we are using 2.x), use only virtual users (we must use real AND virtual users), or want to use a MySQL db; we need to use a text file. Some hints about where we are going wrong would be very much appreciated.
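
    For what it's worth, the log shows the passdb lookup succeeding and the subsequent userdb lookup failing ("user not found from userdb passwd"): virtual users do not exist in /etc/passwd, so Dovecot needs a userdb block that covers them. A minimal sketch for Dovecot 2.x (the vmail uid/gid and home template are assumptions, not values taken from the question):

    # /etc/dovecot/conf.d/10-auth.conf (sketch; uid/gid/home are assumptions)
    userdb {
      driver = static
      args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n allow_all_users=yes
    }

    Since the real users must keep working too, a passwd userdb block can be listed before the static one; Dovecot tries userdb blocks in order until one returns a match.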

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place. The table in question wasn't particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose. Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types. To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute.

    Ok, enough of the pre-amble. I can't reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect:

    IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL
        DROP TABLE dbo.Test
    GO
    CREATE TABLE dbo.Test
    (
        row_id NUMERIC IDENTITY NOT NULL,
        col01 NVARCHAR(450) NOT NULL,
        col02 NVARCHAR(450) NOT NULL,
        col03 NVARCHAR(450) NOT NULL,
        col04 NVARCHAR(450) NOT NULL,
        col05 NVARCHAR(450) NOT NULL,
        col06 NVARCHAR(450) NOT NULL,
        col07 NVARCHAR(450) NOT NULL,
        col08 NVARCHAR(450) NOT NULL,
        col09 NVARCHAR(450) NOT NULL,
        col10 NVARCHAR(450) NOT NULL,
        CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id)
    );

    The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row:

    WITH Numbers AS
    (
        -- Generates numbers 1 - 450 inclusive
        SELECT TOP (450)
            n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
        FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
        ORDER BY n ASC
    )
    INSERT dbo.Test WITH (TABLOCKX)
    SELECT
        REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n),
        REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n),
        REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n),
        REPLICATE(N'J', N.n)
    FROM Numbers AS N
    ORDER BY N.n ASC;

    Once those two scripts have run, the table contains 450 rows and 10 columns of data. Most of the time, when we query data from this table, we don't see any LOB logical reads, for example:

    -- Find the maximum length of the data in
    -- column 5 for a range of rows
    SELECT result = MAX(DATALENGTH(T.col05))
    FROM dbo.Test AS T
    WHERE row_id BETWEEN 50 AND 100;

    But with a different query…

    -- Read all the data in column 1
    SELECT result = MAX(DATALENGTH(T.col01))
    FROM dbo.Test AS T;

    …suddenly we have 49 LOB logical reads, as well as the 'normal' logical reads we would expect.

    The Explanation

    If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes. If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE. These special data types store the large data values in a separate structure, with just a small pointer left in the original row.

    Row Overflow

    SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.
    You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types). Fixed-length columns (like INTEGER and DATETIME, for example) never move into 'row overflow' storage. The decision to move a column off-row is made on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row. Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved!

    Sneaky LOBs

    Anyway, that's all very interesting but I don't want to get too carried away with the intricacies of row-overflow storage internals. The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage. Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row.

    Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now. Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the 'separate storage' I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types. The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB.

    And Finally…

    In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes. A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data. The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row. You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when. You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves 'ordinary' variable-length columns into overflow storage, and it can have dramatic effects on performance. It makes more sense than ever to choose column data types sensibly.
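
    A quick way to confirm which allocation units a table has actually sprouted is sys.dm_db_index_physical_stats in DETAILED mode; a minimal check against the demo table (results will vary with your data, as noted above):

    -- Which allocation unit types does dbo.Test have, and how many pages in each?
    SELECT alloc_unit_type_desc, page_count  -- IN_ROW_DATA, ROW_OVERFLOW_DATA, LOB_DATA
    FROM sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID(N'dbo.Test'), NULL, NULL, 'DETAILED');
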
    If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs. I find that quite appealing, but the 'S' stands for 'small', which makes expanding the whole acronym a little daft-sounding…small large objects anyone?

    © Paul White 2011
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

  • How do I make Nginx redirect all requests for files which do not exist to a single php file?

    - by Richard
    I have the following nginx vhost config:

    server {
        listen 80 default_server;
        access_log /path/to/site/dir/logs/access.log;
        error_log /path/to/site/dir/logs/error.log;
        root /path/to/site/dir/webroot;
        index index.php index.html;
        try_files $uri /index.php;
        location ~ \.php$ {
            if (!-f $request_filename) {
                return 404;
            }
            fastcgi_pass localhost:9000;
            fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
            include /path/to/nginx/conf/fastcgi_params;
        }
    }

    I want to redirect all requests that don't match existing files to index.php. This works fine for most URIs at the moment, for example:

    example.com/asd
    example.com/asd/123/1.txt

    Neither asd nor asd/123/1.txt exists, so the request gets redirected to index.php, and that works fine. However, if I put in the URL example.com/asd.php, nginx looks for asd.php and, when it can't find it, returns 404 instead of sending the request to index.php. Is there a way to get asd.php also sent to index.php if asd.php doesn't exist?
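
    One possible tweak (a sketch, not tested against this exact setup): drop the if block and use try_files inside the PHP location as well, so a request for a missing .php file falls through to index.php like everything else:

    location ~ \.php$ {
        try_files $uri /index.php;   # missing .php files fall back to index.php
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME /path/to/site/dir/webroot$fastcgi_script_name;
        include /path/to/nginx/conf/fastcgi_params;
    }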

    Read the article

  • How can I change a video container without re-encoding or compressing the file?

    - by GiH
    When I ripped my Kill Bill DVD, I used HandBrake and put it into a single AVI. I realized that I didn't get the subtitles, so what I want to do is convert the AVI to MKV and put the subtitles in the MKV. How do I go about doing this without losing any quality? I don't care about compressing or anything, I just want to change the container. If HandBrake can do it, I'd prefer to use that since I already have it.
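
    Since HandBrake is a transcoder, it will re-encode rather than remux; a container change can be done losslessly with ffmpeg or mkvmerge instead. A sketch (file names are placeholders, and it assumes the subtitles are in an external .srt file):

    # Copy the existing video/audio streams untouched and mux the subtitles in
    ffmpeg -i "kill-bill.avi" -i "kill-bill.srt" -map 0 -map 1 -c copy "kill-bill.mkv"

    # Or, with mkvtoolnix:
    mkvmerge -o "kill-bill.mkv" "kill-bill.avi" "kill-bill.srt"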

    Read the article

  • Different file locations for http v https on IIS?

    - by Jeremy Morgan
    We have a server running IIS with some folders running under https, but most are open. The problem I'm having is that when someone is directed from a page in the secure section of the site, a relative link brings up https. For example: a link to /pictures goes to http://www.mysite.com/pictures. But if someone is on a secured part of the site, https://www.mysite.com/shoppingcart, and then clicks back to /pictures, they get https://www.mysite.com/pictures, so the pictures directory is requested under https. My problem is, they get a 404 Not Found message when this happens. I could not find anything in the settings indicating that secured connections pull files from anywhere different than non-secured ones. If I type http or https on the main page of the site, both come up fine. But if I try to add https:// at a folder level, I get a 404. Any ideas why this might be happening?

    Read the article

  • How to Modify a Signature for Use in Plain Text Emails in Outlook 2013

    - by Lori Kaufman
    If you've created a signature with an image, links, text formatting, or special characters, the signature will not look the same in Plain Text formatted emails as it does in HTML format. As the name suggests, Plain Text does not support any type of formatting.

    For example, if you include an image in your signature, the plain text version will be blank. Active links in HTML signatures will be converted to just the text of the link in plain text emails. A How-To Geek link becomes simply How-To Geek and looks like the rest of the text in the signature. The same is true of other active links: they are stripped to their text. A picture of an envelope inserted using the Wingdings font will only display as the plain text character associated with it.

    There are times you may need to send email in Plain Text format but still include your signature. You can edit the plain text version of your signature to make it look good in plain text emails by manually editing the text file.

    To do this, click the File tab, then click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. In the Compose messages section, press and hold the Ctrl key and click the Signatures button. This opens the Signatures folder containing the files used to insert signatures into emails. The .txt file version of each signature is used when inserting a signature into a plain text email.

    Double-click the .txt file for the signature you want to edit to open it in Notepad, or your default text editor. Notice that the links on "How-To Geek" and "Email me" are gone and the envelope typed using the Wingdings font was converted to an "H." Edit the text file to remove extra characters, replace images, and provide full web and email links. Save the text file.

    Create a new mail message and select the edited signature, if it's not the default signature for the current email account. To convert the email to plain text, click the Format Text tab and click Plain Text in the Format section. The Microsoft Outlook Compatibility Checker displays, telling you that formatted text will become plain text. Click Continue. The HTML version of your signature is converted to the plain text version.

    NOTE: You should make a backup of the .txt signature file you edited, as this file will change again when you change your signature in the Signature Editor.

    Read the article

  • How to convert lots of database files from MSSQL 2000 to MSSQL 2005?

    - by Tech
    Hi all, I am moving from MSSQL 2000 to MSSQL 2005, and I found this article on the web: http://www.aspfree.com/c/a/MS-SQL-Server/Moving-Data-from-SQL-Server-2000-to-SQL-Server-2005/ It works, but the problem is that it only moves databases one at a time. Because I have so many databases, is there an easy way to do this, or is there a batch utility that would allow me to do so? Thank you.
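
    One common approach (a sketch; the backup path is an assumption): generate one BACKUP statement per user database from master..sysdatabases on the 2000 server, run the generated output, then restore the .bak files on the 2005 server, which upgrades each database automatically during the restore:

    -- Generate one BACKUP DATABASE statement per user database (SQL Server 2000)
    SELECT 'BACKUP DATABASE [' + name + '] TO DISK = ''C:\Backups\' + name + '.bak'''
    FROM master..sysdatabases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb');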

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background

    This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single programID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build – Design Time

    The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time – Part 1

    We will create a placeholder project first in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project.

    Application Explorer – iWay Design Time Tool

    Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that validation for the schema has been turned off. As a result, this will allow the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time – Part 2

    Create 2 simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function:

    fn:local-name-from-QName(fn:node-name($body/*[1]))

    This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as the top element. With the routing function in place, build the routing table to add 2 branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing – Run-Time

    After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.

    Read the article
