Search Results

Search found 21434 results on 858 pages for 'query master'.


  • Is it possible to order by a composite key with JPA and CriteriaBuilder

    - by Kjir
    I would like to create a query using the JPA CriteriaBuilder and add an ORDER BY clause. This is my entity: @Entity @Table(name = "brands") public class Brand implements Serializable { public enum OwnModeType { OWNER, LICENCED } @EmbeddedId private IdBrand id; private String code; //bunch of other properties } The embedded class is: @Embeddable public class IdBrand implements Serializable { @ManyToOne private Edition edition; private String name; } And this is how I am building my query: CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<Brand> q = cb.createQuery(Brand.class).distinct(true); Root<Brand> root = q.from(Brand.class); if (f != null) { f.addCriteria(cb, q, root); f.addOrder(cb, q, root, sortCol, ascending); } return em.createQuery(q).getResultList(); And here are the functions called: public void addCriteria(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r) { } public void addOrder(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r, String sortCol, boolean ascending) { if (ascending) { q.orderBy(cb.asc(r.get(sortCol))); } else { q.orderBy(cb.desc(r.get(sortCol))); } } If I try to set sortCol to something like "id.name" I get the following error: javax.ejb.EJBException: java.lang.IllegalArgumentException: Unable to resolve attribute [id.name] against path Any idea how I could accomplish that? I tried searching online, but I couldn't find a hint about this... It would also be great if I could do a similar ORDER BY through a @ManyToOne relationship (for instance, "id.edition.number").
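
    One thing worth noting: the Criteria API does not parse dotted strings, so a path such as "id.name" has to be navigated one attribute at a time with chained get() calls. Below is a minimal sketch of addOrder doing that (the path-splitting loop is an illustration, not part of the original code; for the @ManyToOne hop in "id.edition.number" some providers may additionally want an explicit join):

        // requires javax.persistence.criteria.Path
        public void addOrder(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r,
                             String sortCol, boolean ascending) {
            // "id.name" -> r.get("id").get("name"); "id.edition.number" -> three hops
            Path<?> path = r;
            for (String segment : sortCol.split("\\.")) {
                path = path.get(segment);
            }
            q.orderBy(ascending ? cb.asc(path) : cb.desc(path));
        }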

    Read the article

  • How do I bring forward the SELECTED option in PHP from MySQL?

    - by Derek
    Hi all, In my update form, I want the fields to recall the values that are already stored. This is very simple in a text field, but for my drop down () I'm having trouble with PHP reading the already stored name of user. Here is my query and code: $sql = "SELECT users.user_id, users.name FROM users"; $result = mysql_query($sql, $connection) or die ("Couldn't perform query $sql <br />".mysql_error()); $row = mysql_fetch_array($result);?> <label>Designated Person:</label> <select name="name" id="name"> <option value="<?php echo $row['user_id']?>" SELECTED><?php echo $row['name']?> - Current</option> <?php while($row = mysql_fetch_array($result)) { ?> <option value="<?php echo $row['user_id']; if (isset($_POST['user_id']));?>"><?php echo $row['fullname']?></option> <?php } ?> The result of this displays all of the users (as required) and lets me select a user then perform the change successfully...however the 'SELECTED' is always the first one in my database and never the user that was selected when my activity was added :( !!!
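
    One common way to handle this (a sketch only; it assumes the user id stored with the activity has already been fetched into $current_user_id, which is not shown in the code above) is to loop over all users once and emit the selected attribute on the matching option:

        <select name="name" id="name">
        <?php while ($row = mysql_fetch_array($result)) { ?>
            <option value="<?php echo $row['user_id']; ?>"
                <?php if ($row['user_id'] == $current_user_id) echo 'selected="selected"'; ?>>
                <?php echo $row['name']; ?>
            </option>
        <?php } ?>
        </select>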

    Read the article

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented Dynamic DataSource Routing for Spring+Hibernate according to this article. I have several databases with same structure and I need to select which db will run each specific query. Everything works fine on localhost, but I am worrying about how this will hold up in real web site environment. They are using some static context holder to determine which datasource to use: public class CustomerContextHolder { private static final ThreadLocal<CustomerType> contextHolder = new ThreadLocal<CustomerType>(); public static void setCustomerType(CustomerType customerType) { Assert.notNull(customerType, "customerType cannot be null"); contextHolder.set(customerType); } public static CustomerType getCustomerType() { return (CustomerType) contextHolder.get(); } public static void clearCustomerType() { contextHolder.remove(); } } It is wrapped inside some ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel: CustomerContextHolder.setCustomerType(CustomerType.GOLD); //<another user will switch customer type here to CustomerType.SILVER in another request> List<Item> goldItems = catalog.getItems(); Is every web request wrapped into its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How to make sure that nobody else will switch datasource until I run required query for current user? Thanks.
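
    On the core question: a ThreadLocal holds a separate value per thread, and each web request in a servlet container (and therefore in Spring MVC) is handled on its own worker thread, so one request's setCustomerType() is not visible to another request. The usual precaution is to scope the value to the request with try/finally, roughly like this sketch:

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        try {
            // Everything on this thread, down to the AbstractRoutingDataSource lookup,
            // sees GOLD; parallel requests on other threads see their own value.
            List<Item> goldItems = catalog.getItems();
        } finally {
            // Clear the slot so a pooled worker thread does not carry the value
            // into the next request it serves.
            CustomerContextHolder.clearCustomerType();
        }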

    Read the article

  • Losing partitions after every reboot

    - by Winston Smith
    I have an Acer laptop with one hard disk, which up until yesterday had 4 partitions: Recovery Partition (13GB) C: (140GB) D: (130GB) OEM Partition (10GB) I read that the OEM partition has all the stuff needed to restore the laptop to the factory settings, but since I'd already created restore disks and I needed the space, I wanted to get rid of it. Yesterday, I used diskpart to do that. In diskpart, I selected the OEM partition and issued the delete partition override command which removed it. Then I extended the D: partition into the unused space using windows disk management. Everything worked fine, until I rebooted my laptop, at which point the D: drive vanished. Looking in windows disk management again, I can see that there's an OEM partition of 140GB, which is obviously my D: drive. So I used EASEUS Partition Master and assigned a drive letter to the 'OEM' partition and I was able to access my files again. However, every time I reboot, it reverts back. How do I fix this permanently?

    Read the article

  • Is there a way to use Linq projections with extension methods

    - by Acoustic
    I'm trying to use AutoMapper and a repository pattern along with a fluent interface, and running into difficulty with the Linq projection. For what it's worth, this code works fine when simply using in-memory objects. When using a database provider, however, it breaks when constructing the query graph. I've tried both SubSonic and Linq to SQL with the same result. Thanks for your ideas. Here's an extension method used in all scenarios - It's the source of the problem since everything works fine without using extension methods public static IQueryable<MyUser> ByName(this IQueryable<MyUser> users, string firstName) { return from u in users where u.FirstName == firstName select u; } Here's the in-memory code that works fine var userlist = new List<User> {new User{FirstName = "Test", LastName = "User"}}; Mapper.CreateMap<User, MyUser>(); var result = (from u in userlist select Mapper.Map<User, MyUser>(u)) .AsQueryable() .ByName("Test"); foreach (var x in result) { Console.WriteLine(x.FirstName); } Here's the same thing using a SubSonic (or Linq to SQL or whatever) that fails. This is what I'd like to make work somehow with extension methods... Mapper.CreateMap<User, MyUser>(); var result = from u in new DataClasses1DataContext().Users select Mapper.Map<User, MyUser>(u); var final = result.ByName("Test"); foreach(var x in final) // Fails here when the query graph built. { Console.WriteLine(x.FirstName); } The goal here is to avoid having to manually map the generated "User" object to the "MyUser" domain object- in other words, I'm trying to find a way to use AutoMapper so I don't have this kind of mapping code everywhere a database read operation is needed: var result = from u in new DataClasses1DataContext().Users select new MyUser // Can this be avoided with AutoMapper AND extension methods? { FirstName = v.FirstName, LastName = v.LastName };
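
    A workaround worth sketching here (assuming the mapping itself may run in memory): Mapper.Map cannot be translated into SQL, so keep the filter on the IQueryable<User> side where it is translated, and switch to LINQ to Objects with AsEnumerable() before mapping.

        Mapper.CreateMap<User, MyUser>();

        var result = new DataClasses1DataContext().Users
            .Where(u => u.FirstName == "Test")        // translated and run in the database
            .AsEnumerable()                           // LINQ to Objects from here on
            .Select(u => Mapper.Map<User, MyUser>(u)) // mapping happens in memory
            .ToList();

    The ByName extension can be kept by defining it on IQueryable<User> instead of IQueryable<MyUser> and applying it before the AsEnumerable() call.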

    Read the article

  • Mount CIFS share with autofs

    - by Phanto
    I have a system running RHEL 5.5, and I am trying to mount a Windows share on a server using autofs. (Due to the network not being ready upon startup, I do not want to utilize fstab.) I am able to mount the shares manually, but autofs is just not mounting them. Here are the files I am working with: At the end of /etc/auto.master, I have: ## Mount this test share: /test /etc/auto.test --timeout=60 In /etc/auto.test, I have: test -fstype=cifs,username=testuser,domain=domain.com,password=password ://server/test I then restart the autofs service. However, this does not work. ls-ing the directory does not return any results. I have followed all these guides on the web, and I either don't understand them, or they.just.don't.work. Thank You
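
    Two details often cause exactly this symptom, so here is a hedged sketch (host, share and account names are taken from the question; the credentials file is an addition): with an indirect map the share appears at /test/test, not at /test itself, and the /test directory looks empty until the key is accessed by name (unless the map is mounted with --ghost). A credentials file also keeps the password out of auto.test.

        # /etc/auto.master
        /test   /etc/auto.test  --timeout=60

        # /etc/auto.test  (the key "test" becomes the mount point /test/test)
        test    -fstype=cifs,rw,credentials=/etc/cifs.credentials   ://server/test

        # /etc/cifs.credentials  (chmod 600)
        username=testuser
        password=password
        domain=domain.com

    After service autofs restart, ls /test/test should trigger the mount; running automount -f -v in another terminal shows what the daemon is doing if it still fails.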

    Read the article

  • Updating Data through Objects

    - by Chacha102
    So, lets say I have a record: $record = new Record(); and lets say I assign some data to that record: $record->setName("SomeBobJoePerson"); How do I get that into the database. Do I..... A) Have the module do it. class Record{ public function __construct(DatabaseConnection $database) { $this->database = $database; } public function setName($name) { $this->database->query("query stuff here"); $this->name = $name; } } B) Run through the modules at the end of the script class Record{ private $changed = false; public function __construct(array $data=array()) { $this->data = $data; } public function setName($name) { $this->data['name'] = $name; $this->changed = true; } public function isChanged() { return $this->changed; } public function toArray() { return $this->array; } } class Updater { public function update(array $records) { foreach($records as $record) { if($record->isChanged()) { $this->updateRecord($record->toArray()); } } } public function updateRecord(){ // updates stuff } }

    Read the article

  • Call to a member function num_rows() on a non-object

    - by Patrick
    I need to get the number of rows of a query (so I can paginate results). As I'm learning codeigniter (and OO php) I wanted to try and chain a -num_rows() to the query, but it doesn't work: //this works: $data['count'] = count($this->events->findEvents($data['date'], $data['keyword'])); //the following doesn't work and generates // Fatal Error: Call to a member function num_rows() on a non-object $data['count2'] = $this->events->findEvents($data['date'], $data['keyword'])->num_rows(); the model returns an array of objects, and I think this is the reason why I can't use a method on it. function findEvents($date, $keyword, $limit = NULL, $offset = NULL) { $data = array(); $this->db->select('events.*, venues.*, events.venue AS venue_id'); $this->db->join('venues', 'events.venue = venues.id'); if ($date) { $this->db->where('date', $date); } if ($keyword) { $this->db->like('events.description', $keyword); $this->db->or_like('venues.description', $keyword); $this->db->or_like('band', $keyword); $this->db->or_like('venues.venue', $keyword); $this->db->or_like('genre', $keyword); } $this->db->order_by('date', 'DESC'); $this->db->order_by('events.priority', 'DESC'); $this->db->limit($limit, $offset); //for pagination purposes $Q = $this->db->get('events'); if ($Q->num_rows() > 0) { foreach ($Q->result() as $row) { $data[] = $row; } } $Q->free_result(); return $data; } Is there anything that i can do to be able to use it? EG, instead of $data[] = $row; I should use another (OO) syntax?

    Read the article

  • How to get nested chain of objects in Linq and MVC2 application?

    - by Anders Svensson
    I am getting all confused about how to solve this problem in Linq. I have a working solution, but the code to do it is way too complicated and circular I think: I have a timesheet application in MVC 2. I want to query the database that has the following tables (simplified): Project Task TimeSegment The relationships are as follows: A project can have many tasks and a task can have many timesegments. I need to be able to query this in different ways. An example is this: A View is a report that will show a list of projects in a table. Each project's tasks will be listed followed by a Sum of the number of hours worked on that task. The timesegment object is what holds the hours. Here's the View: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Report.Master" Inherits="System.Web.Mvc.ViewPage<Tidrapportering.ViewModels.MonthlyReportViewModel>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Månadsrapport </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h1> Månadsrapport</h1> <div style="margin-top: 20px;"> <span style="font-weight: bold">Kund: </span> <%: Model.Customer.CustomerName %> </div> <div style="margin-bottom: 20px"> <span style="font-weight: bold">Period: </span> <%: Model.StartDate %> - <%: Model.EndDate %> </div> <div style="margin-bottom: 20px"> <span style="font-weight: bold">Underlag för: </span> <%: Model.Employee %> </div> <table class="mainTable"> <tr> <th style="width: 25%"> Projekt </th> <th> Specifikation </th> </tr> <% foreach (var project in Model.Projects) { %> <tr> <td style="vertical-align: top; padding-top: 10pt; width: 25%"> <%:project.ProjectName %> </td> <td> <table class="detailsTable"> <tr> <th> Aktivitet </th> <th> Timmar </th> <th> Ex moms </th> </tr> <% foreach (var task in project.CurrentTasks) {%> <tr class="taskrow"> <td class="task" style="width: 40%"> <%: task.TaskName %> </td> <td style="width: 30%"> <%: task.TaskHours.ToString()%> </td> <td style="width: 30%"> <%: String.Format("{0:C}", task.Cost)%> </td> </tr> <% } %> </table> </td> </tr> <% } %> </table> <table class="summaryTable"> <tr> <td style="width: 25%"> </td> <td> <table style="width: 100%"> <tr> <td style="width: 40%"> Totalt: </td> <td style="width: 30%"> <%: Model.TotalHours.ToString() %> </td> <td style="width: 30%"> <%: String.Format("{0:C}", Model.TotalCost)%> </td> </tr> </table> </td> </tr> </table> <div class="price"> <table> <tr> <td>Moms: </td> <td style="padding-left: 15px;"> <%: String.Format("{0:C}", Model.VAT)%> </td> </tr> <tr> <td>Att betala: </td> <td style="padding-left: 15px;"> <%: String.Format("{0:C}", Model.TotalCostAndVAT)%> </td> </tr> </table> </div> </asp:Content> Here's the action method: [HttpPost] public ActionResult MonthlyReports(FormCollection collection) { MonthlyReportViewModel vm = new MonthlyReportViewModel(); vm.StartDate = collection["StartDate"]; vm.EndDate = collection["EndDate"]; int customerId = Int32.Parse(collection["Customers"]); List<TimeSegment> allTimeSegments = GetTimeSegments(customerId, vm.StartDate, vm.EndDate); vm.Projects = GetProjects(allTimeSegments); vm.Employee = "Alla"; vm.Customer = _repository.GetCustomer(customerId); vm.TotalCost = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.Cost); //Corresponds to above foreach vm.TotalHours = vm.Projects.SelectMany(project => project.CurrentTasks).Sum(task => task.TaskHours); vm.TotalCostAndVAT = vm.TotalCost * 1.25; vm.VAT = vm.TotalCost * 0.25; return 
View("MonthlyReport", vm); } And the "helper" methods: public List<TimeSegment> GetTimeSegments(int customerId, string startdate, string enddate) { var timeSegments = _repository.TimeSegments .Where(timeSegment => timeSegment.Customer.CustomerId == customerId) .Where(timeSegment => timeSegment.DateObject.Date >= DateTime.Parse(startdate) && timeSegment.DateObject.Date <= DateTime.Parse(enddate)); return timeSegments.ToList(); } public List<Project> GetProjects(List<TimeSegment> timeSegments) { var projectGroups = from timeSegment in timeSegments group timeSegment by timeSegment.Task into g group g by g.Key.Project into pg select new { Project = pg.Key, Tasks = pg.Key.Tasks }; List<Project> projectList = new List<Project>(); foreach (var group in projectGroups) { Project p = group.Project; foreach (var task in p.Tasks) { task.CurrentTimeSegments = timeSegments.Where(ts => ts.TaskId == task.TaskId).ToList(); p.CurrentTasks.Add(task); } projectList.Add(p); } return projectList; } Again, as I mentioned, this works, but of course is really complex and I get confused myself just looking at it even now that I'm coding it. I sense there must be a much easier way to achieve what I want. Basically you can tell from the View what I want to achieve: I want to get a collection of projects. Each project should have it's associated collection of tasks. And each task should have it's associated collection of timesegments for the specified date period. Note that the projects and tasks selected must also only be the projects and tasks that have the timesegments for this period. I don't want all projects and tasks that have no timesegments within this period. It seems the group by Linq query beginning the GetProjects() method sort of achieves this (if extended to have the conditions for date and so on), but I can't return this and pass it to the view, because it is an anonymous object. I also tried creating a specific type in such a query, but couldn't wrap my head around that either... I hope there is something I'm missing and there is some easier way to achieve this, because I need to be able to do several other different queries as well eventually. I also don't really like the way I solved it with the "CurrentTimeSegments" properties and so on. These properties don't really exist on the model objects in the first place, I added them in partial classes to have somewhere to put the filtered results for each part of the nested object chain... Any ideas?

    Read the article

  • MySQL: return the value as 0 instead of NULL in the fetch result

    - by Karthik
    I have these two tables: -- -- Table structure for table `t1` -- CREATE TABLE `t1` ( `pid` varchar(20) collate latin1_general_ci NOT NULL, `pname` varchar(20) collate latin1_general_ci NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci; -- -- Dumping data for table `t1` -- INSERT INTO `t1` VALUES ('p1', 'pro1'); INSERT INTO `t1` VALUES ('p2', 'pro2'); -- -------------------------------------------------------- -- -- Table structure for table `t2` -- CREATE TABLE `t2` ( `pid` varchar(20) collate latin1_general_ci NOT NULL, `year` int(6) NOT NULL, `price` int(3) NOT NULL ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci; -- -- Dumping data for table `t2` -- INSERT INTO `t2` VALUES ('p1', 2009, 50); INSERT INTO `t2` VALUES ('p1', 2010, 60); INSERT INTO `t2` VALUES ('p3', 2007, 200); INSERT INTO `t2` VALUES ('p4', 2008, 501); My query is: SELECT * FROM `t1` LEFT JOIN `t2` ON t1.pid = t2.pid and it gives this result: pid pname pid year price p1 pro1 p1 2009 50 p1 pro1 p1 2010 60 p2 pro2 NULL NULL NULL My question is, I want to get the price value as 0 instead of NULL. How can I write the query so that the price comes back as 0? Thanks in advance for your help.
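
    Assuming the goal is simply to substitute 0 for the NULLs produced by the outer join, IFNULL (or the standard COALESCE) in the select list is enough:

        SELECT t1.pid,
               t1.pname,
               t2.year,
               IFNULL(t2.price, 0) AS price   -- COALESCE(t2.price, 0) behaves the same
        FROM t1
        LEFT JOIN t2 ON t1.pid = t2.pid;

    The year column for 'p2' will still come back as NULL; it can be wrapped the same way if needed.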

    Read the article

  • high cpu in IIS

    - by Miki Watts
    Hi all. I'm developing a POS application that has a local database on each POS computer, and communicates with the server using WCF hosted in IIS. The application has been deployed in several customers for over a year now. About a week ago, we've started getting reports from one of our customers that the server that the IIS is hosted on is very slow. When I've checked the issue, I saw the application pool with my process rocket to almost 100% cpu on an 8 cpu server. I've checked the SQL Activity Monitor and network volume, and they showed no significant overload beyond what we usually see. When checking the threads in Process Explorer, I saw lots of threads repeatedly calling CreateApplicationContext. I've tried installing .Net 2.0 SP1, according to some posts I found on the net, but it didn't solve the problem and replaced the function calls with CLRCreateManagedInstance. I'm about to capture a dump using adplus and windbg of the IIS processes and try to figure out what's wrong. Has anyone encountered something like this or has an idea which directory I should check ? p.s. The same version of the application is deployed in another customer, and there it works just fine. I also tried rolling back versions (even very old versions) and it still behaves exactly the same. Edit: well, problem solved, turns out I've had an SQL query in there that didn't limit the result set, and when the customer went over a certain number of rows, it started bogging down the server. Took me two days to find it, because of all the surrounding noise in the logs, but I waited for the night and took a dump then, which immediately showed me the query.

    Read the article

  • Upgrading Active Directory from 2000 to 2008

    - by Doug
    Our config is currently: 1 Windows 2000 domain controller running ISA 2000, DHCP, DNS; 1 Windows 2003 domain controller acting as the main file server, probably a cert server as well, DHCP, DNS; 1 Windows 2008/Exchange 2010 domain controller acting as the Exchange server, DHCP, DNS. We are currently getting FRS errors on the file server (journal wrap error), and FRS errors on the other DCs, which can't replicate from it. The Exchange DC holds the Schema, RID, PDC, and Infrastructure roles; the file server holds the Domain Naming operations master role. WOW, I didn't set this up, I just inherited it. Am I right to assume that fixing the FRS errors is #1, and what do I need to do for that - set Enable Journal Wrap Automatic Restore in the registry? Then: demote the Windows 2000 domain controller (should that have any implications for ISA? We have Forefront to be deployed, but that's another day); transfer the Domain Naming role to the Exchange server (I know, or think, that having an Exchange server as a DC isn't best practice); we will be getting another Windows 2008 server to replace the current file server, and I thought it could take over all roles once deployed; then demote the W2k3 file server and raise the domain functional level to 2008. Am I missing anything, other than the sense to walk away? Thanks

    Read the article

  • RDS Replication across regions

    - by Bryan Migliorisi
    We are using Amazon AWS for our web services but given the recent instabilities in their infrastructure, we are trying to figure out how to run our application across multiple regions for additional redundancy. Ideally, we would run our entire app in a active-active configuration in multiple regions but our main concern is that we are using RDS, which I understand cannot replicate across regions. One possible solution (though we have not tried or proven it would work) would be to do mysqldump or EBS snapshots every hour or so but this would mean that we would be forced to run in an active-passive configuration. Our data would be at most an hour behind. This carries its own issues around data synchronization when we failover and the master comes back up, so its not the best solution. Are there any proven solutions for replicating RDS across regions?
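
    For reference, the hourly-dump variant described above can be as simple as a cron job along these lines (endpoint, credentials and paths are placeholders); --single-transaction keeps the dump consistent for InnoDB tables without locking them:

        #!/bin/sh
        # Hourly logical backup of the primary-region RDS instance, to be shipped to
        # and restored in the standby region. All names below are placeholders.
        mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p"$DB_PASSWORD" \
          --single-transaction --routines --triggers mydb \
          | gzip > /backups/mydb-$(date +%Y%m%d-%H).sql.gz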

    Read the article

  • Nhibernate - stuck with detached criteria (asp.net mvc 1 with nhibernate 2) c#

    - by Jen
    OK so I can't find a good example of this so I can better understand how to use detached criteria (assuming that's what I want to use in the first place). I have 2 tables. Placement and PlacementSupervisor My PlacementSupervisor table has a FK of PlacementID which relates to Placement.PlacementID - though my nhibernate model class has PlacementSupervisor . Placement (rather than specifically specifying a property of placement ID - not sure if this is important). What I am trying to do is - if values are passed through for the supervisor ID I want to restrict placements with that supervisor id. Have tried: ICriteria query = m_PlacementRepository.QueryAlias("p") .... if (criteria.SupervisorId > 0 && !string.IsNullOrEmpty(criteria.SupervisorTypeId)) { DetachedCriteria entityQuery = DetachedCriteria.For<PlacementSupervisor>("sup") .Add(Restrictions.And( Restrictions.Eq("sup.supervisorId", criteria.SupervisorId), Restrictions.Eq("sup.supervisorTypeId", criteria.SupervisorTypeId) )) .SetProjection(Projections.ProjectionList() .AddPropertyAlias("Placement.PlacementId", "PlacementId") ); query.Add(Subqueries.PropertyIn("p.PlacementId", entityQuery)); } Which just gives me the error: Could not find a matching criteria info provider to: (sup.supervisorId = 5 and sup.supervisorTypeId = U) Firstly supervisorTypeId is a string. Secondly I don't understand how to achieve what I'm trying to do so have just been trying various combinations of projections, and property aliases and subquery options..as I don't get how I'm supposed to join to another table/entity when the FK key sits in the second table. Can someone point me in the right direction. It seems like such an easy thing to do from a data perspective that hopefully I'm just missing something obvious!!
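
    A hedged sketch of the subquery (property names follow the question, and it relies on NHibernate resolving the special "id" token through the Placement association without a join; if that is not picked up, a CreateAlias on "Placement" achieves the same thing):

        DetachedCriteria entityQuery = DetachedCriteria.For<PlacementSupervisor>()
            .Add(Restrictions.Eq("supervisorId", criteria.SupervisorId))
            .Add(Restrictions.Eq("supervisorTypeId", criteria.SupervisorTypeId))
            // Project the foreign key by addressing the identifier of the many-to-one.
            .SetProjection(Projections.Property("Placement.id"));

        query.Add(Subqueries.PropertyIn("p.PlacementId", entityQuery));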

    Read the article

  • How to tell if OpenGL is really working in Ubuntu 10.04

    - by Jonathan
    I have a Lenovo S9e running Intel integrated graphics. Here is my lspci output related to the graphics: 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: Lenovo Device 3870 Flags: bus master, fast devsel, latency 0 Memory at f0580000 (32-bit, non-prefetchable) [size=512K] Capabilities: [d0] Power Management version 2 I want to know how I can make sure OpenGL support is running in full on an Ubuntu 10.04 installation. I have a few hints suggesting that it is not: the "Desktop Effects" will not load; apps such as Stardock, when attempting to use OpenGL rendering, display black boxes instead of transparency; in the game Pioneers, the number-tile icons are suspiciously just black circles; and Windows games running with Wine only support software rendering, not hardware rendering. When I boot into a Knoppix LiveCD, the desktop effects do work, splendidly, meaning compiz detects my computer as capable. My problem with troubleshooting is that Canonical has basically eliminated the conf-file-based mechanism of X11 as far as I can tell, thus making it even harder to ensure graphics modules are loading properly. How do I debug and test OpenGL on my Ubuntu 10.04 installation?
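
    A quick way to check is glxinfo and glxgears from the mesa-utils package; direct rendering should say Yes and the renderer string should name the Intel driver rather than a software rasterizer:

        $ sudo apt-get install mesa-utils
        $ glxinfo | grep "direct rendering"    # should report: direct rendering: Yes
        $ glxinfo | grep "OpenGL renderer"     # a software rasterizer here means no hardware GL
        $ glxgears                             # spinning gears; heavy stutter suggests software rendering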

    Read the article

  • Git and Amazon EC2 public key denied

    - by MrNart
    I had Git working before on /var/html/projectfolder, but realized it was a security risk, so I made a new folder /projects off the root folder and tried to replicate what I did, and now it doesn't work. Here is a log of what I did on my local machine and the EC2 server. Server (EC2): 1. I added my public key to the authorized_user file in the ~/.ssh folder. 2. Created a bare repository with git init --bare. 3. Changed folder permissions with sudo chgrp -R ec2-user * and sudo chmod -R g+ws *. Local machine: created a local repository with git init; touched, added, and committed a readme file; pointed origin master to EC2 via git remote add origin ssh://ec2-user@remote-ip/path/to/folder. This is my output: Permission Denied (publickey) fatal: The remote end hung up unexpectedly
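
    A few checks usually sort this out (a sketch; the repository path is a placeholder). Note that sshd reads ~/.ssh/authorized_keys, so if the key really went into a file named authorized_user it will be ignored, and the directory and file must not be group- or world-writable:

        # On the EC2 server
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        chown -R ec2-user:ec2-user ~/.ssh

        # From the local machine: verbose test of the transport git will use
        ssh -v ec2-user@remote-ip

        # If that logs in, point the remote at the bare repository and push
        git remote set-url origin ssh://ec2-user@remote-ip/projects/yourrepo.git
        git push origin master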

    Read the article

  • Facebook API - delete status

    - by Simon R
    In PHP, I'm using curl to send a delete to the fb graph api - and yet I'm getting the following error; {"error":{"type":"GraphMethodException","message":"Unsupported delete request."}} The code I'm using is; $ch = curl_init("https://graph.facebook.com/" . $status_id . ""); curl_setopt($ch, CURLOPT_VERBOSE, 1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_TIMEOUT, 120); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $query); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "DELETE"); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0); curl_setopt($ch, CURLOPT_CAINFO, NULL); curl_setopt($ch, CURLOPT_CAPATH, NULL); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); $result = curl_exec($ch); echo $result; $query contains the access token.

    Read the article

  • If multiple kernel modules can drive the same device, what is the rule for choosing between them?

    - by Dyno Fu
    Both pcnet32 and vmxnet can drive the device. $ lspci -k ... 02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10) Subsystem: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] Flags: bus master, medium devsel, latency 64, IRQ 19 I/O ports at 2000 [size=128] [virtual] Expansion ROM at dc400000 [disabled] [size=64K] Kernel driver in use: vmxnet Kernel modules: vmxnet, pcnet32 Both kernel modules are loaded: $ lsmod | grep net pcnet32 32644 0 vmxnet 17696 0 mii 5212 1 pcnet32 As you can see, the kernel driver in use is vmxnet. Is there any policy or algorithm in the kernel for how it chooses between the candidates?
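
    In short, when two loaded modules both claim the same PCI ID, the one that registers first binds the device, which in practice comes down to module load order rather than any preference ranking. The usual way to make the choice explicit is to blacklist the driver you do not want; a minimal sketch:

        # /etc/modprobe.d/blacklist-pcnet32.conf
        blacklist pcnet32

        # If the module is pulled in from the initramfs, rebuild it and reboot
        # (update-initramfs -u on Debian/Ubuntu, mkinitrd/dracut on Red Hat-style systems).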

    Read the article

  • Mysql select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing, query execution time was perfect, but today when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, so it can't keep up and it's hella slow. The table that is used to store the data has approximately 2 million records and each select looks like this: SELECT * FROM products_loans WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24") AND 369.27 BETWEEN CENA_OD AND CENA_DO; 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referenced post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE output; KOD and CENA_OD/CENA_DO are each indexed via INDEX. CREATE TABLE `products_loans` ( `KOEF_ID` bigint(20) NOT NULL, `KOD` varchar(30) NOT NULL, `AKONTACIA` int(11) NOT NULL, `POCET_SPLATOK` int(11) NOT NULL, `koeficient` decimal(10,2) NOT NULL default '0.00', `CENA_OD` decimal(10,2) default NULL, `CENA_DO` decimal(10,2) default NULL, `PREDAJNA_CENA` decimal(10,2) default NULL, `AKONTACIA_SUMA` decimal(10,2) default NULL, `TYP_VYHODY` varchar(4) default NULL, `stage` smallint(6) NOT NULL default '1', PRIMARY KEY (`KOEF_ID`), KEY `CENA_OD` (`CENA_OD`), KEY `CENA_DO` (`CENA_DO`), KEY `KOD` (`KOD`), KEY `stage` (`stage`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 Also, selecting all loan types and later filtering them through PHP doesn't work well, since each type has over 50k records and that select takes too much time as well... Any ideas about improving the speed are appreciated.
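
    One thing worth trying before restructuring anything (a sketch, not a guaranteed fix): the WHERE clause filters on KOD and the price range together, but the indexes above are all single-column, so a composite index lets MySQL resolve both predicates in one range scan. EXPLAIN will show whether it is actually used.

        ALTER TABLE products_loans
            ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);

        EXPLAIN SELECT * FROM products_loans
        WHERE KOD IN ('X17/Q30-10', 'X17/12', 'X17/5-24')
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;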

    Read the article

  • How to Merge Data From Multiple Excel Files into a Single Excel File or Access Database?

    - by lalabeans
    I have a few dozen excel files which are all of the same format (i.e. 4 worksheets per Excel file). I need to combine all the files into 1 master file which must have just 2 of the 4 worksheets. The corresponding worksheets from each Excel file are named exactly the same as are the column headers. While each file is structured the same, the information within sheet 1 and 2 (for example) is different. So it can’t be combined into one file with everything in one sheet! I've never used VBA before and I'm wondering where I might start this task!
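
    Since VBA is new territory, here is a rough starting sketch rather than a finished macro. It assumes the master workbook already contains two sheets named like the ones to keep (the "Sheet1"/"Sheet2" names and the folder path below are placeholders), with the header row in place; each source file's rows below the header are appended to the matching master sheet.

        Sub MergeWorkbooks()
            Dim folder As String, fname As String
            Dim wb As Workbook, src As Worksheet, dst As Worksheet
            Dim sheetName As Variant
            Dim nextRow As Long

            folder = "C:\Reports\"                  ' folder holding the source files
            fname = Dir(folder & "*.xls*")
            Application.ScreenUpdating = False

            Do While fname <> ""
                Set wb = Workbooks.Open(folder & fname, ReadOnly:=True)
                For Each sheetName In Array("Sheet1", "Sheet2")   ' the two sheets to keep
                    Set src = wb.Worksheets(sheetName)
                    Set dst = ThisWorkbook.Worksheets(sheetName)
                    nextRow = dst.Cells(dst.Rows.Count, 1).End(xlUp).Row + 1
                    ' Copy everything below the header row of the source sheet
                    src.Range("A2", src.UsedRange.SpecialCells(xlCellTypeLastCell)).Copy _
                        dst.Cells(nextRow, 1)
                Next sheetName
                wb.Close SaveChanges:=False
                fname = Dir
            Loop

            Application.ScreenUpdating = True
        End Sub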

    Read the article

  • Few questions about SLI

    - by toomanyairmiles
    Hi all, thanks in advance for your help. I've just added a second card to my system so I can add a third monitor. I'd got as far as determining both cards need to use the same driver (after a blind alley with another cheap ATi card) so I'm now the proud owner of a second BFG 9800 GTX+ card. One is a BFG OCX and the other an BFG OC (small difference in clock speeds but they are in all other respects the same) but wanted to know the following:- 1) Is it worth adding the SLI connector, will it really boost overall performance (I'm guessing that the OCX card will then perform as the OC card does)? 2) Are SLI connectors (the one's that run across the top of the cards) motherboard or manufacturer specific? 3) If I do SLI the cards will I still be able to use all four monitor connectors or just the two on the master card? I'm not a gamer, I'm an IA and web designer so the system is mostly for Photoshop and Illustrator work and the occasional knock around in command and conquer.

    Read the article

  • What are performance limits of a database?

    - by Tommy
    What are some rough performance limits (read/s, write/s) for a single database server (no master-slave architecture), assuming storage on disk? How many read/s, write/s, depending on the kind of disk? (SSD vs non-SSD) , assuming simple operations (select one row by primary key, update one row, correctly indexed). I assume this limit is dependent on disk seek/write. EDIT: My question is more about getting rough metrics of the number of operations a database supports: to be able to know for example, if a new feature triggering 300 inserts/s can be supported without scaling out with additional servers.

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • How to get IQueryable<> from stored procedure (entity framework)

    - by mmcteam
    I want to get an IQueryable<> result when executing a stored procedure. Here is a piece of code that works fine: IQueryable<SomeEntitiy> someEntities; var globbalyFilteredSomeEntities = from se in m_Entities.SomeEntitiy where se.GlobalFilter == 1234 select se; I can use this to apply a global filter, and later use the result like this: result = globbalyFilteredSomeEntities .OrderByDescending(se => se.CreationDate) .Skip(500) .Take(10); What I want to do is use some stored procedures in the global filter. I tried: Adding the stored procedure to m_Entities, but it returns IEnumerable<> and executes the SP immediately: var globbalyFilteredSomeEntities = from se in m_Entities.SomeEntitiyStoredProcedure(1234); Materializing the query using the EFExtensions library, but that is IEnumerable<> too. If I use AsQueryable() and OrderBy(), Skip(), Take() and after that ToList() to execute that query, I get an exception that the DataReader is open and I need to close it first (I can't paste the error - it is in Russian). var globbalyFilteredSomeEntities = m_Entities.CreateStoreCommand("exec SomeEntitiyStoredProcedure(1234)") .Materialize<SomeEntitiy>(); //.AsQueryable() //.OrderByDescending(se => se.CreationDate) //.Skip(500) //.Take(10) //.ToList(); Also, just skipping .AsQueryable() is not helpful - same exception.
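
    For reference, a function import runs the procedure as soon as it is called and hands back an ObjectResult<>/IEnumerable<> that streams from an open DataReader, so it cannot be composed into a store query; iterating it while issuing further commands on the same connection is what produces the "DataReader is already open" error. A hedged workaround is to materialise it completely first and accept that the ordering and paging then happen in memory:

        // Materialise the stored-procedure results before composing anything else.
        List<SomeEntitiy> filtered = m_Entities.SomeEntitiyStoredProcedure(1234).ToList();

        // LINQ to Objects from here on - Skip/Take run in memory, not in SQL Server.
        var page = filtered
            .OrderByDescending(se => se.CreationDate)
            .Skip(500)
            .Take(10)
            .ToList();

    Adding MultipleActiveResultSets=True to the connection string only silences the reader error; it does not make the procedure composable on the server.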

    Read the article

  • Cannot boot from a hard drive

    - by Martin Melka
    I have a problem booting from a hdd. I used to have it as my main drive before I bought an SSD, so I had been able to boot from it. But for some reason, now, half a year later, I can't get it to work. I completely erased it, deleting data and partitioning (using EASEUS Partition Master), then I installed Kubuntu (without changing anything in the installer), but it simply won't boot up. It always boots the drive with Windows and when I unplug this drive, it only gives me an error "PXE-E61: Media test failure, check cable", I guess it's trying to boot from LAN. I tried installing the system on a freshly deleted drive, without any other drives plugged in the pc, but the problems persist. This is how the drives look now (first one has Windows 7 installed, the second one Kubuntu): I am lost. I mean, after doing a fresh wipe and a clean install without altering anything, it should work. But it doesn't. What can be wrong here? Thanks

    Read the article
