Search Results

Search found 27396 results on 1096 pages for 'mysql query'.


  • Java unchecked method invocation

    - by Sam
    I'm trying to set up a multithreaded application using SQLite4java, and everything is working fine. However, according to the getting-started tutorial I am meant to create an object of type "Object" in order to return a value of null (due to the use of generic types). Here is the suggested code:

        queue.execute(new SQLiteJob<Object>() {
            protected Object job(SQLiteConnection connection) throws SQLiteException {
                // this method is called from database thread and passed the connection
                connection.exec(...);
                return null;
            }
        });

    The following example code I created produces the same warning:

        test.java:9: warning: [unchecked] unchecked method invocation:
        <T,J>execute(J) in com.almworks.sqlite4java.SQLiteQueue is applied to (query<java.lang.Integer>)
        queue.execute(new query<Integer>());

    test.java:

        import com.almworks.sqlite4java.*;
        import java.util.ArrayList;
        import java.io.File;

        class test {
            public static void main(String[] args) {
                File f = new File("file.db");
                SQLiteQueue queue = new SQLiteQueue(f);
                queue.execute(new query<Integer>());
            }
        }

    query.java:

        import com.almworks.sqlite4java.SQLiteException;
        import com.almworks.sqlite4java.SQLiteJob;
        import com.almworks.sqlite4java.SQLiteConnection;
        import com.almworks.sqlite4java.SQLiteStatement;
        import java.util.ArrayList;

        class query<T> extends SQLiteJob {
            protected ArrayList<Integer> job(SQLiteConnection connection) throws SQLiteException {
                ArrayList<Integer> ints = new ArrayList<Integer>();
                // DB stuff
                return ints;
            }
        }

    I have read a lot about how this particular message appears when people fail to specify a type for an ArrayList. However, I am not attempting to cast the object or do anything with it; it is merely a mechanism implemented by the library developers in order to return null. I do not believe this to be an issue relating directly to the library, which is why I'm asking this on Stack Overflow. I believe it all comes down to my lack of experience with generic types. I've already spent a few hours on this and don't feel like I am getting anywhere. How do I stop the warning?
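    The warning is triggered by `class query<T> extends SQLiteJob` extending the raw type: the compiler then sees execute() applied to a raw job and cannot check it. A minimal sketch of the fix, assuming the job is meant to return an ArrayList<Integer> (class and method names taken from the question):

        import com.almworks.sqlite4java.SQLiteConnection;
        import com.almworks.sqlite4java.SQLiteException;
        import com.almworks.sqlite4java.SQLiteJob;
        import java.util.ArrayList;

        // Parameterize the superclass instead of extending the raw SQLiteJob;
        // the unused <T> parameter is dropped because the return type is fixed.
        class query extends SQLiteJob<ArrayList<Integer>> {
            protected ArrayList<Integer> job(SQLiteConnection connection) throws SQLiteException {
                ArrayList<Integer> ints = new ArrayList<Integer>();
                // DB stuff
                return ints;
            }
        }

    The call site then becomes queue.execute(new query()); with T inferred as ArrayList<Integer>, the unchecked warning should go away.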

  • How do I read Unicode characters from an MS Access 2007 database through Java?

    - by Peter
    In Java, I have written a program that reads a UTF-8 text file. The text file contains an SQL query of the SELECT kind. The program then executes the query on the Microsoft Access 2007 database and writes all fields of the first row to a UTF-8 text file. The problem I have is when a row is returned that contains Unicode characters, such as "?". These characters show up as "?" in the text file. I know that the text files are read and written correctly, because a dummy UTF-8 character ("?") is read from the text file containing the SQL query and written to the text file containing the resulting row. The UTF-8 character looks correct when the written text file is opened in Notepad, so the reading and writing of the text files are not part of the problem. This is how I connect to the database and how I execute the SQL query:

        Connection c = DriverManager.getConnection(
            "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/database.accdb;Pwd=temp");
        ResultSet r = c.createStatement().executeQuery(sql);

    I have tried passing a charSet property to the Connection, but it makes no difference:

        Properties p = new Properties();
        p.put("charSet", "utf-8");
        p.put("lc_ctype", "utf-8");
        p.put("encoding", "utf-8");
        Connection c = DriverManager.getConnection("...", p);

    Tried with "utf8"/"UTF8"/"UTF-8", no difference. If I enter "UTF-16" I get the following exception: "java.lang.IllegalArgumentException: Illegal replacement". Been searching around for hours with no results and now turn my hope to you. Please help! I also accept workaround suggestions. =) What I want to be able to do is to make a Unicode query (for example one that searches for posts that contain the "?" character) and to have results with Unicode characters received and saved correctly. Thank you!

  • One-to-many relationship with JDO in Google App Engine

    - by Marvin
    I've followed the GAE docs on setting up a one-to-many relationship in JDO, but I'm still having trouble retrieving the collection data back. I have no problem getting the other, non-collection fields back. Here are my classes:

        @PersistenceCapable
        public class User {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String uniqueId;

            @Persistent
            private String email;

            @Persistent
            private List<Address> addresses = new ArrayList<Address>();
            ...
        }

        @PersistenceCapable
        public class Phone {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String number;
            ...
        }

        public class UserDaoImpl implements UserDao {
            public void insertUser(User user) {
                if (user.getKey() == null) {
                    com.google.appengine.api.datastore.Key key =
                        KeyFactory.createKey(User.class.getSimpleName(), user.getEmail());
                    user.setKey(key);
                }
                PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
                notNull(user);
                try {
                    pm.makePersistent(user);
                } finally {
                    pm.close();
                }
            }

            @SuppressWarnings("unchecked")
            public User getUser(String uniqueId) {
                PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
                Query query = pm.newQuery(User.class);
                query.setFilter("uniqueId == uniqueIdParam");
                query.declareParameters("String uniqueIdParam");
                User user = null;
                try {
                    List<User> users = (List<User>) (query.execute(uniqueId)); // TODO abstract this
                    if (users.size() > 0)
                        user = users.get(0);
                } finally {
                    pm.close();
                }
                return user;
            }
        }

        public class UserDaoImplTest {
            @Test
            public void getUserTest() {
                User user = createTestUser();
                assertNotNull("The user object should not be null", user);
                userDao.insertUser(user);
                User returnedUser = userDao.getUser(TEST_USER_ID);
                assertNotNull("The returnedUser object should not be null", returnedUser);
                Assert.assertPropertyEqualsExcludeProperties("User Object", user, returnedUser, "");
            }
        }

    When I run the test, all the properties of User are populated, but the list of Phone objects I get back is empty.
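    In GAE's JDO layer, child collections are fetched lazily and cannot be loaded after the PersistenceManager closes, so a User materialized inside getUser() comes back with an empty child list. A hedged sketch of the common workaround, touching the collection before pm.close(); field names follow the question's User class, and the same applies to whichever child collection (the phones) is actually empty:

        public User getUser(String uniqueId) {
            PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
            Query query = pm.newQuery(User.class);
            query.setFilter("uniqueId == uniqueIdParam");
            query.declareParameters("String uniqueIdParam");
            User user = null;
            try {
                List<User> users = (List<User>) query.execute(uniqueId);
                if (users.size() > 0) {
                    user = users.get(0);
                    user.getAddresses().size(); // touch the lazy collection while pm is open
                }
            } finally {
                pm.close();
            }
            return user;
        }

    Marking the field with @Persistent(defaultFetchGroup = "true") is the declarative alternative, at the cost of always loading the children.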

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try to do in a parallel manner to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid.

    There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.NET WebForms). The original coder decided he would construct the table cell by cell (TableCell), add them to rows (TableRow), then each row to the table. So no GridView; allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time; the most time is spent on looping through the results and adding table cells etc. I made the following method to maintain the original order of the list:

        private TableRow[] ComposeRows(List<CardHolder> queryResult)
        {
            int queryElementsCount = queryResult.Count();

            // array with the query's size
            var rowArray = new TableRow[queryElementsCount];

            Parallel.For(0, queryElementsCount, i =>
            {
                var row = new TableRow();
                var cell = new TableCell();

                // various operations, including simple ones such as:
                cell.Text = queryResult[i].Name;
                row.Cells.Add(cell);

                // here I'm adding the current item to its original index
                // to maintain order in the output list
                rowArray[i] = row;
            });

            return rowArray;
        }

    So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't just simply omit the ordering from the original query to do it after the operations. Also, I thought it would be a good idea to Dispose() the objects at the end of each loop, because the query can return a huge list and letting cell and row objects pile up in the heap could impact performance.(?) How badly did I do? Does anyone have a better solution in case mine is flawed?

  • Expression Too Complex In Access 2007

    - by Jazzepi
    When I try to run this query in Access through the ODBC interface into a MySQL database, I get an "Expression too complex in query expression" error. The essential thing I'm trying to do is translate abbreviated names of languages into their full English counterparts. I was curious if there is some way to "trick" Access into thinking the expression is smaller with subqueries, or if someone else has a better idea of how to solve this problem. I thought about making a temporary table and doing a join on it, but that's not supported in Access SQL.

    Just as an FYI, the query worked fine until I added the big long IIf chain. I tested the query on a smaller IIf chain for three languages, and that wasn't an issue, so the problem definitely stems from the huge IIf chain (it's 26 deep). Also, I might be able to drop some of the options (like combining the different forms of Chinese or Portuguese). As a test, I was able to get the SQL query to work after paring it down to 14 IIf() statements, but that's a far cry from the 26 languages I'd like to represent.

        SELECT TOP 5 Count(*) AS [Number of visits by language],
            IIf(login.lang="ar","Arabic",
            IIf(login.lang="bg","Bulgarian",
            IIf(login.lang="zh_CN","Chinese (Simplified Han)",
            IIf(login.lang="zh_TW","Chinese (Traditional Han)",
            IIf(login.lang="cs","Czech",
            IIf(login.lang="da","Danish",
            IIf(login.lang="de","German",
            IIf(login.lang="en_US","United States English",
            IIf(login.lang="en_GB","British English",
            IIf(login.lang="es","Spanish",
            IIf(login.lang="fr","French",
            IIf(login.lang="el","Greek",
            IIf(login.lang="it","Italian",
            IIf(login.lang="ko","Korean",
            IIf(login.lang="hu","Hungarian",
            IIf(login.lang="nl","Dutch",
            IIf(login.lang="pl","Polish",
            IIf(login.lang="pt_PT","European Portuguese",
            IIf(login.lang="pt_BR","Brazilian Portuguese",
            IIf(login.lang="ru","Russian",
            IIf(login.lang="sk","Slovak",
            IIf(login.lang="sl","Slovenian",
            IIf(login.lang="fi","Finnish",
            IIf(login.lang="sv","Swedish",
            IIf(login.lang="tr","Turkish",
            "Unknown"))))))))))))))))))))))))) AS [Language]
        FROM login, reservations, reservation_users, schedules
        WHERE (reservations.start_date Between
                DATEDIFF('s','1970-01-01 00:00:00',[Starting Date in the Following Format YYYY/MM/DD])
                And DATEDIFF('s','1970-01-01 00:00:00',[Ending Date in the Following Format YYYY/MM/DD]))
            And reservations.is_blackout=0
            And reservation_users.memberid=login.memberid
            And reservation_users.resid=reservations.resid
            And reservation_users.invited=0
            And reservations.scheduleid=schedules.scheduleid
            And scheduletitle=[Schedule Title]
        GROUP BY login.lang
        ORDER BY Count(*) DESC;

    @Michael Todd: I completely agree. The list of languages should have been a table in the database, and login.lang should have been a FK into that table. Unfortunately this isn't how the database was written, and it's not really mine to modify. The languages are placed into the login.lang field by the PHP running on top of the database.
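    Two common ways around a too-deep IIf() chain: keep a small local lookup table in the Access front end (a permanent local table can be joined even when the data tables are ODBC-linked, so the temp-table restriction may not apply), or flatten the chain with Switch(), which takes condition/value pairs instead of nesting. A hedged sketch of the Switch() form; only the first few languages are shown, and whether Switch() clears Access's complexity limit at 26 branches is something to verify:

        SELECT TOP 5 Count(*) AS [Number of visits by language],
            Switch(
                login.lang = "ar", "Arabic",
                login.lang = "bg", "Bulgarian",
                login.lang = "zh_CN", "Chinese (Simplified Han)",
                login.lang = "sv", "Swedish",
                True, "Unknown"
            ) AS [Language]
        FROM ...
        GROUP BY login.lang
        ORDER BY Count(*) DESC;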

  • Symfony: ajax call causes server to queue next queries

    - by Remiz
    Hello, I have a problem with my application: an Ajax call on the server causes it to queue all the other requests from the user until it's done server side (I realized that canceling the call client side has no effect and the user still has to wait). Here is my test case:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="another-page.php">Go to another page on the same server</a>
        <script type="text/javascript">
        url = 'http://localserver/some-very-long-complex-query';
        $.get(url);
        </script>

    So when the get is fired and I then click on the link, the server finishes serving the first call before bringing me to the other page. My problem is that I want to avoid this behavior. I'm on a LAMP server and I'm looking for a way to inform the server that the user aborted the query, with a function like connection_aborted(). Do you think that's the way to go? Also, I know that the longest part of this PHP script is a MySQL query, so even if connection_aborted() can detect that the user canceled the call, I would still need to check this during the MySQL query... I'm not really sure that PHP can handle this kind of "event". So if you have any better idea, I can't wait to hear it. Thank you.

    Update: After further investigation, I found that the problem happens only with the Symfony framework (which I omitted to mention, my bad). It seems that an Ajax call locks out any other future call. It may be related to the controller or the routing system; I'm looking into it. Also, for those interested in the problem, here is my new test case:

    - a new project with Symfony 1.4.3, default configuration; I just created an app and a default module.
    - jQuery 1.4 for the Ajax query.

    Here is my actions.class.php (in my unique module):

        class defaultActions extends sfActions
        {
            public function executeIndex(sfWebRequest $request)
            {
                // Do nothing
            }

            public function executeNewpage()
            {
                // Do also nothing
            }

            public function executeWaitingaction()
            {
                // Wait
                sleep(30);
                return false;
            }
        }

    Here is my indexSuccess.php template file:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="<?php echo url_for('default/newpage');?>">Go to another symfony action</a>
        <script type="text/javascript">
        url = '<?php echo url_for('default/waitingaction');?>';
        $.get(url);
        </script>

    For the new page template, it's not very relevant... But with this, I'm able to reproduce the locking problem I have in my real application. Is somebody else having the same issue? Thanks.
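    For what it's worth, this serialized-requests symptom is classically caused by PHP's session file locking: the session opened at the start of the first action stays locked until the action ends, and every other request from the same session blocks on it (symfony 1.4 opens the session via sfSessionStorage on each request, which would explain why the lock shows up framework-wide). A hedged sketch of the usual workaround, releasing the lock before the slow work starts; where exactly to call it inside a symfony action is an assumption:

        public function executeWaitingaction()
        {
            // Write and unlock the session file now, so concurrent requests
            // from the same user are no longer queued behind this action.
            session_write_close();

            sleep(30); // stand-in for the long-running MySQL query

            return false;
        }

    After this call the action can no longer write to the session, which is usually acceptable for a read-only Ajax endpoint.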

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common(BOOL)
        items:         id | item_name_id | picture | price | description | picture
        item_synonyms: id | item_name_id | name | error(BOOL)

    Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

    - Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")
    - Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The down side would be:

    - Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both.
    - Weird item names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

        items: id | name | picture | price | description | picture

    (... with item_names and item_synonyms as utility tables that I could query)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT

  • Intermittent "Specified cast is invalid" with StructureMap injected data context

    - by FreshCode
    I am intermittently getting a System.InvalidCastException: Specified cast is not valid. error in my repository layer when performing an abstracted SELECT query mapped with LINQ. The error can't be caused by a mismatched database schema, since it works intermittently and it's on my local dev machine. Could it be because StructureMap is caching the data context between page requests? If so, how do I tell StructureMap v2.6.1 to inject a new data context argument into my repository for each request?

    Update: I found this question, which correlates my hunch that something was being re-used. Looks like I need to call Dispose on my injected data context. Not sure how I'm going to do this to all my repositories without copy-pasting a lot of code.

    Edit: These errors are popping up all over the place whenever I refresh my local machine too quickly. Doesn't look like it's happening on my remote deployment box, but I can't be sure. I changed all my repositories' StructureMap lifecycles to HttpContextScoped() and the error persists.

    Code:

        public ActionResult Index()
        {
            // error happens here, which queries my page repository
            var page = _branchService.GetPage("welcome");
            if (page != null)
                ViewData["Welcome"] = page.Body;
            ...
        }

    Repository: GetPage boils down to a filtered query mapping in my page repository.

        public IQueryable<Page> GetPages()
        {
            var pages = from p in _db.Pages
                        let categories = GetPageCategories(p.PageId)
                        let revisions = GetRevisions(p.PageId)
                        select new Page
                        {
                            ID = p.PageId,
                            UserID = p.UserId,
                            Slug = p.Slug,
                            Title = p.Title,
                            Description = p.Description,
                            Body = p.Text,
                            Date = p.Date,
                            IsPublished = p.IsPublished,
                            Categories = new LazyList<Category>(categories),
                            Revisions = new LazyList<PageRevision>(revisions)
                        };
            return pages;
        }

    where _db is an injected data context stored in a private variable, which I reuse for SELECT queries.

    Error:

        Specified cast is not valid.
        Exception Details: System.InvalidCastException: Specified cast is not valid.

        Stack Trace:
        [InvalidCastException: Specified cast is not valid.]
        System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) +4539
        System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) +207
        System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) +500
        System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute(Expression expression) +50
        System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) +383
        Manager.Controllers.SiteController.Index() in C:\Projects\Manager\Manager\Controllers\SiteController.cs:68
        lambda_method(Closure , ControllerBase , Object[] ) +79
        System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +258
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39
        System.Web.Mvc.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a() +125
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +640
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +312
        System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +709
        System.Web.Mvc.Controller.ExecuteCore() +162
        System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +58
        System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +20
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +453
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371
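    With HttpContextScoped(), objects are reused for the lifetime of the request, but nothing disposes them unless you ask; in StructureMap 2.6 the usual pattern is to release HTTP-scoped objects at EndRequest. A hedged sketch (ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects is the StructureMap 2.6 API; the Global.asax placement is the conventional spot, not something from the question):

        // Global.asax.cs
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            // Disposes anything created with an HttpContext lifecycle, e.g. the
            // LINQ to SQL data context injected into the repositories, so the
            // next request gets a fresh one instead of a half-consumed reader.
            StructureMap.ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
        }

    The intermittent InvalidCastException fits a data context shared across concurrent requests: LINQ to SQL's DataContext is not thread-safe, so two overlapping queries can corrupt each other's result materialization.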

  • What does Apache need to support both mysqli and PDO?

    - by Nathan Long
    I'm considering changing some PHP code to use PDO for database access instead of mysqli (because the PDO syntax makes more sense to me and is database-agnostic). To do that, I'd need both methods to work while I'm making the changeover. My problem is this: so far, either one or the other method will crash Apache. Right now I'm using XAMPP on Windows XP, and PHP version 5.2.8. Mysqli works fine, and so does this:

        $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password);
        echo 'Connected to database';
        $sql = "SELECT * FROM `employee`";

    But this line makes Apache crash:

        $dbc->query($sql);

    I don't want to redo my entire Apache or XAMPP installation, but I'd like for PDO to work. So I tried updating libmysql.dll from here, as oddvibes recommended here. That made my simple PDO query work, but then mysqli queries crashed Apache. (I also tried the suggestion after that one, to update php_pdo_mysql.dll and php_pdo.dll, to no effect.)

    Test case: I created this test script to compare PDO vs mysqli. With the old copy of libmysql.dll, it crashes if $use_pdo is true and doesn't if it's false. With the new copy of libmysql.dll, it's the opposite.

        if ($use_pdo) {
            $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password);
            echo 'Connected to database<br />';
            $sql = "SELECT * FROM `employee`";
            $dbc->query($sql);
            foreach ($dbc->query($sql) as $row) {
                echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n";
            }
        } else {
            $dbc = @mysqli_connect($hostname, $username, $password, $dbname)
                OR die('Could not connect to MySQL: ' . mysqli_connect_error());
            $sql = "SELECT * FROM `employee`";
            $result = @mysqli_query($dbc, $sql) or die(mysqli_error($dbc));
            while ($row = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
                echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n";
            }
        }

    What does Apache need in order to support both methods of database query?

  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this assertion-failure error when trying to insert an element into an stxxl map. The entire assertion error is the following:

        resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470:
        std::pair , bool stxxl::btree::btree ::insert(const value_type&)
        [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type,
        unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u,
        PDAllocStrategy = stxxl::SR , stxxl::btree::btree ::value_type = std::pair ]:
        Assertion `it != root_node_.end()' failed.
        Aborted

    Any idea?

    Edit: Here's the code fragment:

        void request_handler::handle_request(my_key& query, reply& rep)
        {
            c_++;
            strip(query.content);
            std::cout << "Received query " << query.content << " by thread "
                      << boost::this_thread::get_id() << ". It is number " << c_ << "\n";
            strcpy(element.first.content, query.content);
            element.second = c_;
            testcache_.insert(element);
            STXXL_MSG("Records in map: " << testcache_.size());
        }

    Edit 2: Here's more detail (I omit constants, e.g. MAX_QUERY_LEN):

        struct comp_type : std::binary_function<my_key, my_key, bool>
        {
            bool operator () (const my_key & a, const my_key & b) const
            {
                return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0;
            }
            static my_key max_value() { return max_key; }
            static my_key min_value() { return min_key; }
        };

        typedef stxxl::map<my_key, my_data, comp_type> cacheType;
        cacheType testcache_;

        request_handler::request_handler()
            : testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE)
        {
            c_ = 0;
            memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN);
            memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN);
            testcache_.enable_prefetching();
            STXXL_MSG("Records in map: " << testcache_.size());
        }

  • SQL Cartesian product joining table to itself and inserting into existing table

    - by Emma
    I am working in phpMyAdmin using SQL. I want to take the primary key (EntryID) from TableA and create a cartesian product (if I am using the term correctly) in TableB (an empty table already created) for all entries which share the same value for FieldB in TableA, except where an entry would be paired with itself. So, for example, if the values in TableA were:

        TableA.EntryID | TableA.FieldB
        1              | 23
        2              | 23
        3              | 23
        4              | 25
        5              | 25
        6              | 25

    The result in TableB would be:

        Primary key | EntryID1 | EntryID2 | FieldD (Default or manually entered)
        1           | 1        | 2        | Default value
        2           | 1        | 3        | Default value
        3           | 2        | 1        | Default value
        4           | 2        | 3        | Default value
        5           | 3        | 1        | Default value
        6           | 3        | 2        | Default value
        7           | 4        | 5        | Default value
        8           | 4        | 6        | Default value
        9           | 5        | 4        | Default value
        10          | 5        | 6        | Default value
        11          | 6        | 4        | Default value
        12          | 6        | 5        | Default value

    I am used to working in Access, and this is the first query I have attempted in SQL. I started trying to work out the query and got this far. I know it's not right yet, as I'm still trying to get used to the syntax and pieced this together from various articles I found online. In particular, I wasn't sure where the INSERT INTO text goes (to create what would be an append query in Access):

        SELECT EntryID
        FROM TableA.EntryID TableA.EntryID
        WHERE TableA.FieldB=TableA.FieldB
        TableA.EntryID<>TableA.EntryID
        INSERT INTO TableB.EntryID1 TableB.EntryID2

    After I've got that query right, I need to do a TRIGGER query (I think), so if an entry changes its value in TableA.FieldB (changing its membership of that grouping to another grouping), the cartesian product will be re-run on THAT entry, unless TableB.FieldD = valueA or valueB (manually entered values).

    I have been using the Designer tab. Does there have to be a relationship link between TableA and TableB? If so, would it be two links from the EntryID primary key in TableA, one to each EntryID in TableB? I assume this would not work, because they are numbered EntryID1 and EntryID2 and the name needs to be the same to set up a relationship. If you can offer any suggestions, I would be very grateful.

    Research: http://www.fluffycat.com/SQL/Cartesian-Joins/ (Cartesian join example two. Q: You said you can have a Cartesian join by joining a table to itself. Show that! Select * From Film_Table T1, Film_Table T2;)
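    A hedged sketch of the self-join insert described above (MySQL syntax; it assumes TableB's primary key is AUTO_INCREMENT and FieldD has a DEFAULT, so neither needs to appear in the column list):

        INSERT INTO TableB (EntryID1, EntryID2)
        SELECT a.EntryID, b.EntryID
        FROM TableA AS a
        JOIN TableA AS b
          ON a.FieldB = b.FieldB       -- same grouping value
         AND a.EntryID <> b.EntryID;   -- exclude pairing an entry with itself

    Aliasing TableA twice (a and b) is what makes the self-join possible; the inequality keeps both orderings, (1,2) and (2,1), which matches the expected output shown above.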

  • PHP problems when transfering code from Windows to OS X

    - by Makka95
    I have recently bought a new MacBook Pro. Before I had my MacBook Pro I was working on a website on my desktop computer. Now I want to transfer this code to my new MacBook Pro. The problem is that when I transferred the code (I put it on Dropbox and simply downloaded it on my MacBook Pro), I started to see lots of error messages in my PHP code. The error message I'm receiving is:

        Warning: Cannot modify header information - headers already sent by
        (output started at /some/file.php:1) in /some/file.php on line 23

    I have done some research on this, and it seems that this error is most frequently caused by a new line, simple whitespace or any output before the <?php tag. I have looked through all the places where cookies are being sent in the HTTP request and also where I'm using the header() function. I haven't detected any output or whitespace that could possibly interfere and cause this problem. Noteworthy is that the error always says that the output started at line 1. Which got me thinking: is there some kind of difference in the way that Mac OS X and Windows handle new lines or whitespace? Or could the Dropbox transfer have messed something up?

    The code on one of the pages (login.php) which produces the error:

        <?php
        include "mysql_database.php";
        login();
        $id = $_SESSION['Loggedin'];
        setcookie("login", $id, (time()+60*60*24*30));
        header('Location: ' . $_SERVER['HTTP_REFERER']);
        ?>

    The login function:

        function login()
        {
            $connection = connecttodatabase();
            $pass = "";
            $user = "";
            $query = "";
            if (isset($_POST['user']) && $_POST['user'] != null) {
                $user = $_POST['user'];
                if (isset($_POST['pass']) && $_POST['pass'] != null) {
                    $pass = md5($_POST['pass']);
                    $query = "SELECT ID FROM Anvandare WHERE Nickname='$user' AND Password ='$pass'";
                }
            }
            if ($query != "") {
                $id = $connection->query($query);
                $id = mysqli_fetch_assoc($id);
                $id = $id['ID'];
                $_SESSION['Loggedin'] = $id;
            }
            closeconnection($connection);
        }

    Complete error:

        Warning: Cannot modify header information - headers already sent by
        (output started at /Users/name/GitHub/website/login.php:1)
        in /Users/namn/GitHub/website/login.php on line 9
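    Given that the warning consistently blames line 1, one very common culprit when files get re-saved on a different OS/editor is a UTF-8 byte-order mark (BOM): three invisible bytes before <?php that PHP sends as output, which then blocks setcookie() and header(). A hedged way to check and strip it from a terminal on OS X (file name taken from the question; xxd ships with vim):

        # A UTF-8 BOM shows up as the bytes "ef bb bf"
        head -c 3 login.php | xxd

        # Strip the first three bytes into a new file, then verify before swapping
        tail -c +4 login.php > login.nobom.php

    If a BOM is present, re-saving the files as "UTF-8 without BOM" in the editor fixes all of them at once.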

  • PHP class function.

    - by Jordan Pagaduan
    What is wrong with this code?

        <?php
        class users
        {
            var $user_id, $f_name, $l_name, $db_host, $db_user, $db_name, $db_table;

            function user_input()
            {
                $this->$db_host = 'localhost';
                $this->$db_user = 'root';
                $this->$db_name = 'input_oop';
                $this->$db_table = 'users';
            }

            function userInput($f_name, $l_name)
            {
                $dbc = mysql_connect($this->db_host, $this->db_user, "")
                    or die("Cannot connect to database : " . mysql_error());
                mysql_select_db($this->db_name) or die(mysql_error());
                $query = "insert into $this->db_table values (NULL, \"$f_name\", \"$l_name\")";
                $result = mysql_query($query);
                if (!$result) die(mysql_error());
                $this->userID = mysql_insert_id();
                mysql_close($dbc);
                $this->first_name = $f_name;
                $this->last_name = $l_name;
            }

            function userUpdate($new_f_name, $new_l_name)
            {
                $dbc = mysql_connect($this->db_host, $this->db_user, "") or die(mysql_error());
                mysql_select_db($this->db_name) or die(mysql_error());
                $query = "UPDATE $this->db_table set = \"$new_f_name\" , \"$new_l_name\"
                          WHERE user_id = \"$this->user_id\"";
                $result = mysql_query($query);
                $this->f_name = $new_f_name;
                $this->l_name = $new_l_name;
                $this->user_id = $user_id;
                mysql_close($dbc);
            }

            function userDelete()
            {
                $dbc = mysql_connect($this->db_host, $this->db_user, "") or die(mysql_error());
                mysql_select_db($this->db_name) or die(mysql_error());
                $query = "DELETE FROM $this->db_table WHERE $user_id = \"$this->user_id\"";
                mysql_close($dbc);
            }
        }
        ?>

    The error is:

        Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'ODBC'@'localhost'
        (using password: NO) in C:\xampp\htdocs\jordan_pagaduan\class.php on line 21
        Cannot connect to database : Access denied for user 'ODBC'@'localhost' (using password: NO)

    It seems the code never actually sets "$this->db_host" to "localhost".
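    Two things stand out, assuming PHP 4-style classes as written: the settings method is named user_input(), but a PHP 4 constructor must be named after the class (users), so nothing ever runs it and the connection settings stay unset; and it assigns through $this->$db_host (a variable variable reading the undefined local $db_host) instead of $this->db_host. With db_host and db_user empty, mysql_connect() falls back to the php.ini defaults, which on a Windows/XAMPP setup is exactly the 'ODBC'@'localhost' user in the error. A sketch of the corrected initializer:

        class users
        {
            var $user_id, $f_name, $l_name, $db_host, $db_user, $db_name, $db_table;

            // PHP 4-style constructor: the name must match the class so that
            // `new users()` actually runs this initialization.
            function users()
            {
                $this->db_host  = 'localhost'; // note: no `$` after `->`
                $this->db_user  = 'root';
                $this->db_name  = 'input_oop';
                $this->db_table = 'users';
            }
        }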

  • Are python list comprehensions always a good programming practice?

    - by dln385
    To make the question clear, I'll use a specific example. I have a list of college courses, and each course has a few fields (all of which are strings). The user gives me a string of search terms, and I return a list of courses that match all of the search terms. This can be done in a single list comprehension or a few nested for loops. Here's the implementation. First, the Course class:

        class Course:
            def __init__(self, date, title, instructor, ID, description,
                         instructorDescription, *args):
                self.date = date
                self.title = title
                self.instructor = instructor
                self.ID = ID
                self.description = description
                self.instructorDescription = instructorDescription
                self.misc = args

    Every field is a string, except misc, which is a list of strings. Here's the search as a single list comprehension. courses is the list of courses, and query is the string of search terms, for example "history project".

        def searchCourses(courses, query):
            terms = query.lower().strip().split()
            return tuple(course for course in courses if all(
                term in course.date.lower()
                or term in course.title.lower()
                or term in course.instructor.lower()
                or term in course.ID.lower()
                or term in course.description.lower()
                or term in course.instructorDescription.lower()
                or any(term in item.lower() for item in course.misc)
                for term in terms))

    You'll notice that a complex list comprehension is difficult to read. I implemented the same logic as nested for loops, and created this alternative:

        def searchCourses2(courses, query):
            terms = query.lower().strip().split()
            results = []
            for course in courses:
                for term in terms:
                    if (term in course.date.lower()
                            or term in course.title.lower()
                            or term in course.instructor.lower()
                            or term in course.ID.lower()
                            or term in course.description.lower()
                            or term in course.instructorDescription.lower()):
                        break
                    for item in course.misc:
                        if term in item.lower():
                            break
                    else:
                        continue
                    break
                else:
                    continue
                results.append(course)
            return tuple(results)

    That logic can be hard to follow too. I have verified that both methods return the correct results. Both methods are nearly equivalent in speed, except in some cases. I ran some tests with timeit, and found that the former is three times faster when the user searches for multiple uncommon terms, while the latter is three times faster when the user searches for multiple common terms. Still, this is not a big enough difference to make me worry.

    So my question is this: which is better? Are list comprehensions always the way to go, or should complicated statements be handled with nested for loops? Or is there a better solution altogether?
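    A third option worth weighing is factoring the per-term test into a small helper, which keeps the comprehension short without the for/else gymnastics. A sketch reusing the field names from the question (the helper name is made up for illustration):

        def _term_matches(course, term):
            """True if `term` occurs in any searchable field of `course`."""
            fields = (course.date, course.title, course.instructor,
                      course.ID, course.description, course.instructorDescription)
            return (any(term in f.lower() for f in fields)
                    or any(term in item.lower() for item in course.misc))

        def searchCourses3(courses, query):
            terms = query.lower().strip().split()
            return tuple(course for course in courses
                         if all(_term_matches(course, term) for term in terms))

    The comprehension then reads the way the spec does ("every term matches somewhere"), and the helper is independently testable.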

  • Speedstep and Intel x5570?

    - by sajal
    Hi, my new server has 2 x X5570 CPUs. Here is the output of grep -i hz /proc/cpuinfo:

        model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
        cpu MHz    : 1600.231

    (the same two lines repeat identically for all 16 logical CPUs)

    It always remains the same, no matter how much load MySQL or any other app puts on the box. Even when MySQL eats 2 or 3 CPUs at 100% each, the output of cpuinfo is the same. In fact, MySQL performance for some heavy inserts is poorer than on my old E5430 server. Any clues? I contacted the server provider; they tried turning off SpeedStep and we still see the same results. Any insights would be helpful, because I am paying heavily for this box and would love to milk all the juice I can.
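    A flat 1600 MHz reading across all cores usually means the cpufreq governor is clocking them down; whether it ramps up under load can be watched directly in sysfs. A hedged sketch (the sysfs paths are the standard Linux cpufreq ones; whether the hoster's kernel exposes them is an assumption):

        # Current governor and frequency for cpu0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq

        # Pin every core to full clock (as root)
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
            echo performance > "$g"
        done

    If the governor is already "performance" and the box still reports 1600 MHz, the frequency capping is likely happening in the BIOS/BMC, which only the provider can change.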

  • Custom SNMP Cacti Data Source fails to update

    - by Andrew Wilkinson
    I'm trying to create a custom SNMP data source for Cacti, but despite everything I can check being correct, it is not creating the rrd file, or updating it even when I create it myself. Other, standard SNMP sources are working correctly, so it's not SNMP or permissions that are the problem. I've created a new Data Query, which when I click on "Verbose Query" on the device screen returns the following:

        + Running data query [10].
        + Found type = '3' [SNMP Query].
        + Found data query XML file at '/volume1/web/cacti/resource/snmp_queries/syno_volume_stats.xml'
        + XML file parsed ok.
        + missing in XML file, 'Index Count Changed' emulated by counting oid_index entries
        + Executing SNMP walk for list of indexes @ '.1.3.6.1.2.1.25.2.3.1.3' Index Count: 8
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' value: 'Physical memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' value: 'Virtual memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' value: 'Memory buffers'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' value: 'Cached memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' value: 'Swap space'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' value: '/'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' value: '/volume1'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' value: '/opt'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' results: '1'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' results: '3'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' results: '6'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' results: '7'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' results: '10'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' results: '31'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' results: '32'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' results: '33'
        + Located input field 'index' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.3'
        + Found item [index='Physical memory'] index: 1 [from value]
        + Found item [index='Virtual memory'] index: 3 [from value]
        + Found item [index='Memory buffers'] index: 6 [from value]
        + Found item [index='Cached memory'] index: 7 [from value]
        + Found item [index='Swap space'] index: 10 [from value]
        + Found item [index='/'] index: 31 [from value]
        + Found item [index='/volume1'] index: 32 [from value]
        + Found item [index='/opt'] index: 33 [from value]
        + Located input field 'volsizeunit' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.4'
        + Found item [volsizeunit='1024 Bytes'] index: 1 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 3 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 6 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 7 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 10 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 31 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 32 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 33 [from value]
        + Located input field 'volsize' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.5'
        + Found item [volsize='1034712'] index: 1 [from value]
        + Found item [volsize='3131792'] index: 3 [from value]
        + Found item [volsize='1034712'] index: 6 [from value]
        + Found item [volsize='775904'] index: 7 [from value]
        + Found item [volsize='2097080'] index: 10 [from value]
        + Found item [volsize='612766'] index: 31 [from value]
        + Found item [volsize='1439812394'] index: 32 [from value]
        + Found item [volsize='1439812394'] index: 33 [from value]
        + Located input field 'volused' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.6'
        + Found item [volused='1022520'] index: 1 [from value]
        + Found item [volused='1024096'] index: 3 [from value]
        + Found item [volused='32408'] index: 6 [from value]
        + Found item [volused='775904'] index: 7 [from value]
        + Found item [volused='1576'] index: 10 [from value]
        + Found item [volused='148070'] index: 31 [from value]
        + Found item [volused='682377865'] index: 32 [from value]
        + Found item [volused='682377865'] index: 33 [from value]

    As you can see, it appears to be returning the correct data. I've also set up data templates and graph templates to display the data. The "create graphs for a device" screen shows the correct data, and when I select one row and click create, a new data source and graph are created. Unfortunately, the data source is never updated. Increasing the poller log level shows that it appears to not even be querying the data source, despite it being in use. What should my next steps be to debug this issue?

  • How can a CentOS 6 guest running in VirtualBox be configured as a LAMP server that can be accessed from the Windows host?

    - by jtt89
    I was able to connect CentOS 6 on VirtualBox to Windows (I can ping in both directions) with a Host-only Adapter (for the connection between the two) and a NAT Adapter (to enable Linux on VirtualBox to connect to the Internet). I want to set up httpd, mysql and vsftpd servers, and in the end easily connect to httpd from a Windows-based browser and to the FTP server with a Windows-based client as well. I would also want to have access through SSH. I have a general idea of the steps that are involved, but there is also some configuration that I am not sure about at this point. Let's say I follow these steps:

        yum install httpd
        yum install php php-pear php-mysql
        yum install mysql-server
        mysql_secure_installation
        yum install vsftpd
        yum install mod_ssl

    Technically I have everything installed, but what would be the next steps that I need to take (from the networking point of view, so to speak) to get it all working? I know I need to configure at least Apache and the FTP server, but I am not sure how it is going to work; for example, where am I going to be uploading the sites (I know this can vary), and how am I going to know what address to use in a browser if I want to go to website x, y, z on that installation. This sounds like I need to do some kind of DNS setup, and I am kind of stuck at this point. If somebody could give me a general outline of what needs to be done, that would be great (I was looking at a lot of websites and I know about /etc/sysconfig/network, httpd.conf (not too much about it on Apache's site), hostname, hostname -f, etc.; but it is kind of hard to piece it all together at this point). I am also going to be looking at books, but they don't always reflect the setup that I have (VirtualBox). Thank you.
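    For the networking side, a hedged sketch of the usual next steps on a stock CentOS 6 guest; the host-only address below is an assumption (substitute whatever ifconfig reports for the host-only adapter, often in VirtualBox's default 192.168.56.0/24 range):

        # Start the services and have them come back after reboot
        service httpd start   && chkconfig httpd on
        service mysqld start  && chkconfig mysqld on
        service vsftpd start  && chkconfig vsftpd on

        # Open HTTP, FTP and SSH in the guest's firewall
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        iptables -I INPUT -p tcp --dport 21 -j ACCEPT
        iptables -I INPUT -p tcp --dport 22 -j ACCEPT
        service iptables save

    From Windows you then browse, FTP or SSH to the guest's host-only IP (e.g. http://192.168.56.101/), and sites go under Apache's DocumentRoot (/var/www/html by default). Instead of running DNS, an entry per site in C:\Windows\System32\drivers\etc\hosts on the Windows side, paired with name-based virtual hosts in httpd.conf, is usually enough for a development VM.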

  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and ran FreeRADIUS in debug mode, but it rejects all authentication. The MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    The radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    The radiusd debug-mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user.
              Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject] expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem and how should I solve it?
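    The debug output gives the reason: the pap module finds no "known good" password, because FreeRADIUS 2.x expects a plaintext password in the control attribute Cleartext-Password with the := operator, not the old Password/== form (and forcing Auth-Type in radcheck is likewise discouraged). A hedged sketch of the corrected rows:

        -- Replace the old-style entries for this user
        DELETE FROM radcheck WHERE username = 'test';
        INSERT INTO radcheck (username, attribute, op, value)
        VALUES ('test', 'Cleartext-Password', ':=', 'test123');

    With that row in place, pap can authenticate the user and the Auth-Type row is unnecessary. This also assumes the sql module is actually listed in the authorize section of sites-enabled/default; the log above shows no [sql] line at all, which is worth checking too.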

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g.:

        /chroot/website1
        /chroot/website2

    I'm using makejail, which is a high-level tool for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot by configuration; however, I'm not using this (maybe I should?). My question is which of the following approaches is better:

    1) Every website has its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably most secure, but also most advanced.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chrooted folder. This is quite manageable; however, only PHP will be chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, the market for chroot guides is very scarce. Any help or input is much appreciated!

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book and task functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some using MS Outlook, and some others using Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos require installation of MySQL/PgSQL, LDAP, Samba, etc. While I can manage the installation of Samba (if required), I have no idea whether installing LDAP, MySQL etc. is really required. Also, any guidance as to how to install on a regular mail server would be appreciated. Sorry if this sounds vague. If any more information is required, I'll be happy to give it. Thanks in advance.

    Edit: The server in question has always been managed via cPanel (to install PHP, MySQL, configure DNS etc.), so I am confused about whether I really need LDAP.

  • Configuring apache and php to handle many connections

    - by Marc
    My preliminary setup is like this:

    - Two quad-core 8GB servers running Debian 6, with PHP and Apache
    - One quad-core 16GB server running Debian 6, with MySQL

    My plan is to have one 8GB server act as a proxy server, using vertx Java to handle connections. I will let vertx use HttpClient to send web requests to the second 8GB server. This one has Apache installed and uses PHP to deliver any information that it gets from the MySQL server on the third, 16GB server. The main reason I want this setup is to have things separated, so the proxy will be the only way to access the system, as the other two servers will only be reachable from the local network. I can have the vertx proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems not to have any issues on this part. In previous projects, the Apache part was always causing issues, no matter how I set it up. I might not fully understand how to set up Apache, since normally Apache should handle many concurrent connections, but it doesn't seem to now.
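    For the Apache side, the knobs that usually matter on Debian 6 (Apache 2.2 with PHP, prefork MPM) are the prefork limits. A hedged sketch with illustrative numbers, not recommendations; each prefork child holds one persistent mysqli connection, so MaxClients and the 500-800 connection pool need to be sized together, and per-child RAM times MaxClients must fit in 8GB:

        # /etc/apache2/apache2.conf
        <IfModule mpm_prefork_module>
            StartServers          50
            MinSpareServers       50
            MaxSpareServers      100
            ServerLimit          800
            MaxClients           800
            MaxRequestsPerChild 1000
        </IfModule>

        KeepAlive On
        KeepAliveTimeout 2    # recycle proxied connections quickly

    An alternative worth considering in this design is the worker MPM with PHP behind FastCGI, since 5000+ proxied connections fan out poorly onto a process-per-connection Apache.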

  • Postfix connect timing out remotely, working fine locally

    - by Moritz
    Running Postfix on Debian, I cannot connect to send mail any more. It worked until approximately a week ago. I do not recall touching the configuration of the server during that time, which makes it difficult for me to find out what the problem is. When connecting from the server to itself it works fine:

        root@xxxx:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.localdomain.
        Escape character is '^]'.
        ehlo localhost
        220 mail.xxxx.de ESMTP Postfix (Debian/GNU)
        250-mail.xxxx.de
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

    Trying to do the same remotely times out:

        laptop:~ $ telnet mail.xxxx.de 25
        Trying 93.xx.xx.xx...
        telnet: connect to address 93.xx.xx.xx: Operation timed out
        telnet: Unable to connect to remote host

    The configuration is as follows:

        root@xxxx:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = ipv4
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = localhost.localdomain, localhost.localdomain, localhost
        myhostname = mail.xxxx.de
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_exceptions_networks = $mynetworks
        smtpd_sasl_local_domain =
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        virtual_alias_maps = proxy:mysql:$config_directory/mysql_virtual_alias_maps.cf
        virtual_gid_maps = static:8
        virtual_mailbox_base = /var/vmail
        virtual_mailbox_domains = proxy:mysql:$config_directory/mysql_virtual_domains_maps.cf
        virtual_mailbox_maps = proxy:mysql:$config_directory/mysql_virtual_mailbox_maps.cf
        virtual_minimum_uid = 150
        virtual_transport = dovecot

    Receiving mail is no problem, as is retrieving it remotely. Do you have an idea what I could check next?
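    Since Postfix answers on localhost but the remote connection times out rather than being refused, the listener itself is fine and something is dropping packets on port 25: a firewall on or in front of the server, or the client's network blocking outbound 25 (common on consumer ISPs). Hedged checks, assuming iptables and netcat are available:

        # On the server: any rules touching port 25?
        iptables -L -n -v | grep -w 25

        # From a third host on a different network:
        nc -vz mail.xxxx.de 25

    If the third host connects while the laptop cannot, the block is on the laptop's network, and offering submission on port 587 (the submission service in Postfix's master.cf) is the standard workaround.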

  • Setting up a DNS name server for a mass virtual host with Bind9

    - by Dez
    I am trying to set up a chrooted DNS name server on a local LAN, so that everyone connected to the LAN can access the mass virtual hosts defined for a development environment without having to edit their local /etc/hosts manually, one by one. The mass virtual host is named example.user.dev (VirtualDocumentRoot /home/user/example) and example.test (DocumentRoot /var/www/example). I set up everything and /var/log/syslog doesn't show any errors, but when checking the DNS with:

        host -v example.test

    it doesn't find the host. Using the dig command I don't receive an answer either:

        dig -x example.test

        ; <<>> DiG 9.5.1-P3 <<>> -x imprimere
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47844
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;imprimere.in-addr.arpa.        IN      PTR

        ;; AUTHORITY SECTION:
        in-addr.arpa.  600  IN  SOA  a.root-servers.net. dns-ops.arin.net. 2010042604 1800 900 691200 10800

        ;; Query time: 108 msec
        ;; SERVER: 80.58.0.33#53(80.58.0.33)
        ;; WHEN: Mon Apr 26 11:15:53 2010
        ;; MSG SIZE  rcvd: 107

    My configuration is the following:

    /etc/bind/named.conf.local:

        zone "example.test" {
            type master;
            allow-query { any; };
            file "/etc/bind/zones/master_example.test";
            notify yes;
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            allow-query { any; };
            file "/etc/bind/zones/master_1.168.192.in-addr.arpa";
            notify yes;
        };

    /etc/bind/named.conf.options (note: we have a static IP address, so I forward the queries to the DNS server at said IP address):

        options {
            directory "/var/cache/bind";
            forwarders { 80.34.100.160; };
            auth-nxdomain no;
            listen-on-v6 { any; };
        };

    /etc/bind/zones/master_example.test:

        $ORIGIN example.test.
        $TTL 86400
        @   IN  SOA  example.test. root.example.test. (
                201004227  ; serial
                28800      ; refresh
                14400      ; retry
                3600000    ; expire
                86400 )    ; min
            TXT  "example.test, DNS service"
        @   IN  NS   example.test.
        localhost      A      127.0.0.1
        example.test.  A      192.168.1.52
        example        A      192.168.1.52
        www            CNAME  example.test.

    /etc/hosts:

        127.0.0.1     localhost example
        192.168.1.52  localhost example example.test

    /etc/resolv.conf (for Bind I just added the 3 last lines):

        nameserver 80.58.0.33
        nameserver 80.58.61.250
        nameserver 80.58.61.254
        search example.test
        search example
        nameserver 192.168.1.52
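    One thing to note from the output above: dig -x does a reverse (PTR) lookup and expects an IP address, so "dig -x" against a name was rewritten as imprimere.in-addr.arpa and sent to the upstream resolver 80.58.0.33, which says nothing about the local zone. Hedged queries that exercise the local BIND directly (IP taken from the zone file):

        # Forward record, asked of the local server explicitly
        dig @192.168.1.52 example.test A +short

        # Reverse record from the 1.168.192.in-addr.arpa zone
        dig @192.168.1.52 -x 192.168.1.52 +short

    If these answer while a plain "host example.test" does not, the remaining issue is resolver ordering: the 192.168.1.52 nameserver line would need to come first in /etc/resolv.conf (also note that when multiple "search" lines are present, only the last one takes effect; the domains belong on a single line).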

  • Unable to Access Localhost after starting Xampp

    - by user7370
    OS: Windows XP Professional, SP2. A few days back I had XAMPP Lite 1.7.1 installed and was able to access localhost and phpMyAdmin through the browser. Today it suddenly stopped working. In Firefox, after I type http://localhost/ nothing happens, just a blank white screen. I removed all the files in the xampplite folders and re-installed version 1.7.1 again; it's of no use. Then I installed XAMPP Lite 1.7.2 (the latest), which I had downloaded from the XAMPP website; again it's of no use. Apache and MySQL are running, though. I'm trying to install WordPress locally, as I have a theme ready and want to convert that design to WordPress, test it, and start using it online. Running 'Port-Check' on the XAMPP control panel showed this:

        Service             Port   Status
        -------------------------------------------------------------
        Apache (HTTP)         80   C:\xampplite\apache\bin\httpd.exe
        Apache (WebDAV)       81   free
        Apache (HTTPS)       443   C:\xampplite\apache\bin\httpd.exe
        MySQL               3306   C:\xampplite\mysql\bin\mysqld.exe
        FileZilla (FTP)       21   free
        FileZilla (Admin)  14147   free
        Mercury (SMTP)        25   free
        Mercury (POP3)       110   free
        Mercury (IMAP)       143   free
        Mercury (HTTP)      2224   free
        Mercury (Finger)      79   free
        Mercury (PH)         105   free
        Mercury (PopPass)    106   free
        Tomcat (AJP/1.3)    8009   free
        Tomcat (HTTP)       8080   free

    I also have Skype installed, but it's not using port 80 (as I have read this was a common issue; under Skype's options the port is 65013). And when I run file:///C:/xampp/htdocs/index.php it shows "Something is wrong with the XAMPP installation :-(". Please help with this problem. Thanks, Sharath Kumar

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6 and installed MacPorts, with the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc. All versions the same as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else got MediaWiki 1.15 working on OS X 10.6 with MacPorts? Anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
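    The backtrace shows Language::loadLocalisation(NULL): by the time the OpenID extension's hook fires, the language code is empty. A hedged first check in LocalSettings.php after a migration (the "en" value is an assumption; use whatever the Tiger install had, and confirm the OpenID extension version matches MediaWiki 1.15):

        # LocalSettings.php
        $wgLanguageCode = "en";

        # Commenting the extension out temporarily isolates whether OpenID is the trigger:
        # require_once("$IP/extensions/OpenID/OpenID.setup.php");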
