mgwr's Introduction

Python Spatial Analysis Library


PySAL, the Python spatial analysis library, is an open source, cross-platform library for geospatial data science, written in Python, with an emphasis on geospatial vector data. It supports the development of high-level applications for spatial analysis, such as

  • detection of spatial clusters, hot-spots, and outliers
  • construction of graphs from spatial data
  • spatial regression and statistical modeling on geographically embedded networks
  • spatial econometrics
  • exploratory spatio-temporal data analysis

PySAL Components

PySAL is a family of packages for spatial data science and is divided into four major components:

Lib

The lib layer solves a wide variety of computational geometry problems, including graph construction from polygonal lattices, lines, and points; construction and interactive editing of spatial weights matrices and graphs; computation of alpha shapes, spatial indices, and spatial-topological relationships; and reading and writing of sparse graph data, as well as pure Python readers of spatial vector data. Unlike other PySAL modules, these functions are exposed together as a single package.

  • libpysal : libpysal provides foundational algorithms and data structures that support the rest of the library. This currently includes the following modules: input/output (io), which provides readers and writers for common geospatial file formats; weights (weights), which provides the main class to store spatial weights matrices, as well as several utilities to manipulate and operate on them; computational geometry (cg), with several algorithms, such as Voronoi tessellations or alpha shapes that efficiently process geometric shapes; and an additional module with example data sets (examples).

Explore

The explore layer includes modules to conduct exploratory analysis of spatial and spatio-temporal data. At a high level, packages in explore are focused on enabling the user to better understand patterns in the data and suggest new interesting questions rather than answer existing ones. They include methods to characterize the structure of spatial distributions (either on networks, in continuous space, or on polygonal lattices). In addition, this domain offers methods to examine the dynamics of these distributions, such as how their composition or spatial extent changes over time.

  • esda : esda implements methods for the analysis of both global (map-wide) and local (focal) spatial autocorrelation, for both continuous and binary data. In addition, the package increasingly offers cutting-edge statistics about boundary strength and measures of aggregation error in statistical analyses. A brief usage sketch follows this list.

  • giddy : giddy is an extension of esda to spatio-temporal data. The package hosts state-of-the-art methods that explicitly consider the role of space in the dynamics of distributions over time.

  • inequality : inequality provides indices for measuring inequality over space and time. These comprise classic measures such as the Theil T information index and the Gini index in mean deviation form, but also spatially-explicit measures that incorporate the location and spatial configuration of observations in the calculation of inequality measures.

  • momepy : momepy is a library for quantitative analysis of urban form - urban morphometrics. It aims to provide a wide range of tools for a systematic and exhaustive analysis of urban form. It can work with a wide range of elements, while focusing on building footprints and street networks. momepy stands for Morphological Measuring in Python.

  • pointpats : pointpats supports the statistical analysis of point data, including methods to characterize the spatial structure of an observed point pattern: a collection of locations where some phenomena of interest have been recorded. This includes measures of centrography which provide overall geometric summaries of the point pattern, including central tendency, dispersion, intensity, and extent.

  • segregation : the segregation package calculates over 40 different segregation indices and provides a suite of additional features for measurement, visualization, and hypothesis testing that together represent the state of the art in quantitative segregation analysis.

  • spaghetti : spaghetti supports the spatial analysis of graphs, networks, topology, and inference. It includes functionality for the statistical testing of clusters on networks, a robust all-to-all Dijkstra shortest path algorithm with multiprocessing functionality, high-performance geometric and spatial computations using geopandas that are necessary for high-resolution interpolation along networks, and the ability to connect near-network observations onto the network.
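The Moran's I sketch referenced in the esda entry above: a minimal example of measuring global spatial autocorrelation, assuming the bundled Columbus example data ships with your libpysal version:

import numpy as np
import libpysal
from esda.moran import Moran

# global Moran's I for housing values in the Columbus example data
f = libpysal.io.open(libpysal.examples.get_path('columbus.dbf'))
y = np.array(f.by_col('HOVAL'))
w = libpysal.weights.Queen.from_shapefile(libpysal.examples.get_path('columbus.shp'))
mi = Moran(y, w)
print(mi.I, mi.p_sim)  # statistic and pseudo p-value from permutations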

Model

In contrast to explore, the model layer focuses on confirmatory analysis. In particular, its packages focus on the estimation of spatial relationships in data with a variety of linear, generalized-linear, generalized-additive, nonlinear, multi-level, and local regression models.

  • mgwr : mgwr provides scalable algorithms for estimation, inference, and prediction using single- and multi-scale geographically-weighted regression models in a variety of generalized linear model frameworks, as well as model diagnostic tools. A brief usage sketch follows this list.

  • spglm : spglm implements a set of generalized linear regression techniques, including Gaussian, Poisson, and logistic regression, that allow for sparse matrix operations in their computation and estimation to lower memory overhead and decrease computation time.

  • spint : spint provides a collection of tools to study spatial interaction processes and analyze spatial interaction data. It includes functionality to facilitate the calibration and interpretation of a family of gravity-type spatial interaction models, including those with production constraints, attraction constraints, or a combination of the two.

  • spreg : spreg supports the estimation of classic and spatial econometric models. Currently it contains methods for estimating standard Ordinary Least Squares (OLS), Two Stage Least Squares (2SLS) and Seemingly Unrelated Regressions (SUR), in addition to various tests of homoskedasticity, normality, spatial randomness, and different types of spatial autocorrelation. It also includes a suite of tests for spatial dependence in models with binary dependent variables.

  • spvcm : spvcm provides a general framework for estimating spatially-correlated variance components models. This class of models allows for spatial dependence in the variance components, so that nearby groups may affect one another. It also provides a general-purpose framework for estimating models using Gibbs sampling in Python, accelerated by the numba package.

  • tobler : tobler provides functionality for areal interpolation and dasymetric mapping. Its name is an homage to the legendary geographer Waldo Tobler, a pioneer of dozens of spatial analytical methods. tobler includes functionality for interpolating data using area-weighted approaches, regression model-based approaches that leverage remotely-sensed raster data as auxiliary information, and hybrid approaches.

  • access : access aims to make it easy for analysts to calculate measures of spatial accessibility. This work has traditionally had two challenges: [1] calculating accurate travel time matrices at scale and [2] deriving measures of access using the travel times and supply and demand locations. access implements classic spatial access models, allowing easy comparison of methodologies and assumptions.

  • spopt: spopt is an open-source Python library for solving optimization problems with spatial data. Originating from the original region module in PySAL, it is under active development for the inclusion of newly proposed models and methods for regionalization, facility location, and transportation-oriented solutions.
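The GWR sketch referenced in the mgwr entry above: a minimal calibration on the bundled Georgia example data. This is a sketch of typical usage, not an exhaustive workflow:

import numpy as np
import libpysal as ps
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

data = ps.io.open(ps.examples.get_path('GData_utm.csv'))
coords = list(zip(data.by_col('X'), data.by_col('Y')))
y = np.array(data.by_col('PctBach')).reshape((-1, 1))
X = np.array(data.by_col('PctRural')).reshape((-1, 1))

bw = Sel_BW(coords, y, X).search()  # golden-section search, AICc criterion by default
results = GWR(coords, y, X, bw).fit()
print(results.aicc)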

Viz

The viz layer provides functionality to support the creation of geovisualisations and visual representations of outputs from a variety of spatial analyses. Visualization plays a central role in modern spatial/geographic data science. Current packages provide classification methods for choropleth mapping and a common API for linking PySAL outputs to visualization tool-kits in the Python ecosystem.

  • legendgram : legendgram is a small package that provides "legendgrams", legends that visualize the distribution of observations by color in a given map. These distributional visualizations for map classification schemes assist in analytical cartography and spatial data visualization.

  • mapclassify : mapclassify provides functionality for choropleth map classification. Currently, fifteen different classification schemes are available, including a highly-optimized implementation of Fisher-Jenks optimal classification. Each scheme inherits a common structure that ensures computations are scalable and supports applications in streaming contexts. A brief usage sketch follows this list.

  • splot : splot provides statistical visualizations for spatial analysis. It offers methods for visualizing global and local spatial autocorrelation (through Moran scatterplots and cluster maps), temporal analysis of cluster dynamics (through heatmaps and rose diagrams), and multivariate choropleth mapping (through value-by-alpha maps). A high-level API supports the creation of publication-ready visualizations.
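The mapclassify sketch referenced above: a minimal classification example. The synthetic data here is purely illustrative:

import numpy as np
import mapclassify

y = np.random.lognormal(3, 1, size=100)  # skewed values, a typical choropleth input
fj = mapclassify.FisherJenks(y, k=5)     # Fisher-Jenks optimal classification
print(fj.bins)      # upper bound of each class
print(fj.yb[:10])   # class label assigned to the first ten observations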

Installation

PySAL is available through Anaconda (in the defaults or conda-forge channel). We recommend installing PySAL from conda-forge:

conda config --add channels conda-forge
conda install pysal

PySAL can also be installed using pip:

pip install pysal

As of version 2.0.0, PySAL has shifted to Python 3 only.

Users who need an older stable version of PySAL that is Python 2 compatible can install version 1.14.3 through pip or conda:

conda install pysal==1.14.3

Documentation

For help on using PySAL, check out the following resources:

Development

As of version 2.0.0, PySAL is now a collection of affiliated geographic data science packages. Changes to the code for any of the subpackages should be directed at the respective upstream repositories, and not made here. Infrastructural changes for the meta-package, like those for tooling, building the package, and code standards, will be considered.

Development is hosted on GitHub.

Discussions of development, as well as help for users, occur on the developer list as well as in PySAL's Discord channel.

Getting Involved

If you are interested in contributing to PySAL please see our development guidelines.

Bug reports

To search for or report bugs, please see PySAL's issues.

Build Instructions

To build the meta-package pysal, see tools/README.md.

License information

See the file "LICENSE.txt" for information on the history of this software, terms & conditions for usage, and a DISCLAIMER OF ALL WARRANTIES.


mgwr's Issues

Variable standardization and documentation

Opening this issue to further discuss whether or not we should include an option that automatically standardizes variables and, if we do, how we can document that it has been done to ensure the user does not misinterpret their results. Though we have discussed having a standardization option, it might be useful to leave this out of the API. Standardization is quite simple to do yourself, and I would assume that if you can access and utilize the command-line API for MGWR, then you can also sort out your own standardization. It then follows that if you are self-standardizing the variables, you know that it is being done, and interpretation is left up to the user as usual. Also, we can have an automatic standardization option in the GUI (even if it is not part of the API). Since the GUI has an output statement and educational materials, we can update those to include notes that remind the user their variables are standardized and how to interpret their results.

Need help on "multi_bw_min" in mgwr.sel_bw

  • I had trouble learning the mgwr package. Could you help me?
  • As in the help document: multi_bw_min is a list of min values used for each covariate in the mgwr bandwidth search. It must be either a single value or have one value for each covariate, including the intercept.
  1. How should I understand the minimum value for each covariate?
  2. And why is it set to 2 in the following example?
  • When I set multi_bw_min to zero in my case, the adjusted R-squared is 0.82, while setting it to "None" gives an adjusted R-squared of 0.49.

Following is the example from the help document:

#basic model calibration

>>> import numpy as np
>>> import libpysal as ps
>>> from mgwr.gwr import MGWR
>>> from mgwr.sel_bw import Sel_BW
>>> data = ps.io.open(ps.examples.get_path('GData_utm.csv'))
>>> coords = list(zip(data.by_col('X'), data.by_col('Y')))
>>> y = np.array(data.by_col('PctBach')).reshape((-1,1))
>>> rural = np.array(data.by_col('PctRural')).reshape((-1,1))
>>> fb = np.array(data.by_col('PctFB')).reshape((-1,1))
>>> african_amer = np.array(data.by_col('PctBlack')).reshape((-1,1))
>>> X = np.hstack([fb, african_amer, rural])
>>> X = (X - X.mean(axis=0)) / X.std(axis=0)
>>> y = (y - y.mean(axis=0)) / y.std(axis=0)
>>> selector = Sel_BW(coords, y, X, multi=True)
>>> selector.search(multi_bw_min=[2])
[92.0, 101.0, 136.0, 158.0]
>>> model = MGWR(coords, y, X, selector, fixed=False, kernel='bisquare', sigma2_v1=True)
>>> results = model.fit()
>>> print(results.params.shape)
(159, 4)

import libpysal error for Windows

ImportError Traceback (most recent call last)
~\miniconda3\envs\da35\lib\site-packages\libpysal\cg\alpha_shapes.py in
23 try:
---> 24 import pygeos
25

ImportError: No module named 'pygeos'

During handling of the above exception, another exception occurred:

NameError Traceback (most recent call last)
<ipython-input> in <module>
----> 1 import libpysal as ps

~\miniconda3\envs\da35\lib\site-packages\libpysal\__init__.py in
25 Tools for creating and manipulating weights
26 """
---> 27 from . import cg
28 from . import io
29 from . import weights

~\miniconda3\envs\da35\lib\site-packages\libpysal\cg\__init__.py in
9 from .sphere import *
10 from .voronoi import *
---> 11 from .alpha_shapes import *

~\miniconda3\envs\da35\lib\site-packages\libpysal\cg\alpha_shapes.py in
25
26 HAS_PYGEOS = True
---> 27 except ModuleNotFoundError:
28 HAS_PYGEOS = False
29

NameError: name 'ModuleNotFoundError' is not defined

MGWR has no argument 'family'

I am trying to fit an MGWR model for a binary Y variable. As mentioned in the paper (section 3.4), I specify family=Binomial() in the Sel_BW and the MGWR object. The MGWR object throws an error that says unexpected argument 'family' (screenshot attached). The paper mentions this for the GWR object, though; I am assuming that MGWR too supports fitting on a binary Y variable. Does MGWR not support the binomial model, or is there some other way to pass this?

[screenshot: mgwr_family_error, not reproduced here]

clearwater example files

We are in the process of moving the large example datasets out of the source install for libpysal. The clearwater example data set looks to be consumed directly from mgwr rather than from libpysal:

(base) pysal/pysal - [master●] » cd model
(base) pysal/model - [master●] » grep -r clearwater .
./mgwr/tests/test_gwr.py:            os.path.dirname(__file__), 'clearwater/landslides.csv')
./mgwr/tests/test_gwr.py:                'clearwater/clearwater_BS_F_listwise.csv'))
./mgwr/tests/test_gwr.py:                'clearwater/clearwater_BS_NN_listwise.csv'))
./mgwr/tests/test_gwr.py:                'clearwater/clearwater_GS_F_listwise.csv'))
./mgwr/tests/test_gwr.py:                'clearwater/clearwater_GS_NN_listwise.csv'))
./mgwr/tests/clearwater/clearwater_BS_NN.ctl:C:\Users\IEUser\Desktop\clearwater\clearwater\landslides.csv
./mgwr/tests/clearwater/clearwater_BS_NN.ctl:summary_output: C:\Users\IEUser\Desktop\clearwater_BS_NN_summary.txt
./mgwr/tests/clearwater/clearwater_BS_NN.ctl:listwise_output: C:\Users\IEUser\Desktop\clearwater_BS_NN_listwise.csv
./mgwr/tests/clearwater/clearwater_GS_NN_summary.txt:Session control file: C:\Users\IEUser\Desktop\clearwater_GS_NN.ctl
./mgwr/tests/clearwater/clearwater_GS_NN_summary.txt:Data filename: C:\Users\IEUser\Desktop\clearwater\clearwater\landslides.csv
./mgwr/tests/clearwater/clearwater_GS_NN_summary.txt:    Listwise output file: C:\Users\IEUser\Desktop\clearwater_GS_NN_listwise.csv
./mgwr/tests/clearwater/clearwater_BS_NN_summary.txt:Session control file: C:\Users\IEUser\Desktop\clearwater_BS_NN.ctl
./mgwr/tests/clearwater/clearwater_BS_NN_summary.txt:Data filename: C:\Users\IEUser\Desktop\clearwater\clearwater\landslides.csv
./mgwr/tests/clearwater/clearwater_BS_NN_summary.txt:    Listwise output file: C:\Users\IEUser\Desktop\clearwater_BS_NN_listwise.csv

So this means for the pysal meta package we are currently installing two copies of this dataset.

I would like to propose that we rewrite the mgwr tests that use this dataset to pull from a remote repository so we don't have to include it in the source distribution.

I'm happy to do a PR into mgwr to implement that once the libpysal.examples refactor is done, but I wanted to put this on the radar screen and get feedback.

Allow user-set BW's for MGWR

So right now we can set a min and max bw, but it applies to all of the underlying gwr routines during the bw search, and this means that we cannot set individual bw's for each covariate. For instance, we can force the bw min and max to be 20, but then we get a bandwidth of 20 for each covariate. It would be good to have a mechanism to individually set the bandwidth for each covariate, in case a user wants to manually explore relationships. I also needed to do this for the standard error simulations, so I came up with an awkward hack but haven't yet figured out how to best pull it through the API.

The one difference between gwr and mgwr is that I don't think we can set the bw manually in mgwr the same way we can in gwr. In gwr, you can simply select the bandwidth. But in mgwr, you need the results of the bandwidth search procedure, like the partial residuals, which will be unique to the combination of covariates, their starting bandwidth search values, and potentially the ordering of the variables (has anyone checked this out yet?). So I think it makes sense to have the manual bandwidth definition for mgwr be set as a vector of mins and maxes in the search procedure. A hedged sketch of this idea follows.
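A sketch of the vector-of-mins-and-maxes idea using the existing search API: pinning multi_bw_min equal to multi_bw_max per covariate effectively fixes each bandwidth while still running the backfitting procedure. The specific bandwidth values below are illustrative only, and coords, y, X are assumed from a working MGWR setup:

from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW

# one value per covariate, including the intercept
pinned = [45, 100, 30, 158]  # illustrative bandwidths, not recommendations
selector = Sel_BW(coords, y, X, multi=True)
selector.search(multi_bw_min=pinned, multi_bw_max=pinned)
results = MGWR(coords, y, X, selector).fit()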

docs: document critical_tval

I think the docs on critical_tval are unclear. I expected critical_tval(0.05) to produce the t-value for a two-tailed test corrected for multiple testing, but it is actually just a passthrough to t.ppf (with a correction for two-tailed tests). It should be documented that you need to use adj_alpha to account for multiple testing.
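A hedged usage sketch of the distinction described above, assuming a fitted GWRResults object named results:

# passing a raw alpha is just a two-tailed t.ppf lookup, NOT corrected for multiple testing
naive = results.critical_tval(0.05)

# adj_alpha holds alpha levels corrected for multiple (dependent) local tests;
# feeding the corrected level back in gives the corrected critical t-value
corrected = results.critical_tval(results.adj_alpha[1])  # index 1 ~ the 0.05 level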

kth out of bounds

Hello guys, does anyone know how to fix this? I've tried every kernel but it didn't work. I would appreciate any help.
This is my first time using GitHub, so I don't know how to share my code except by copying it. Sorry about that.
Here is my code:

import numpy as np
import pysal as ps
from mgwr.gwr import GWR, MGWR
from mgwr.sel_bw import Sel_BW
import pandas as pd
georgia_data = pd.read_csv('C:/Users/DELL/Desktop/georgia/GData_utm.csv')                             
g_y = georgia_data['17s'].values
g_X = georgia_data[['17sr', '17gas_proportion', '17energy_intensity', '17structure', '17per_gdp', '17population']].values
u = georgia_data['X']
v = georgia_data['Y']
g_coords=list(zip(u,v))
bw=Sel_BW(g_coords, g_y, g_X, kernel='gaussian').search(criterion='AIC')
print(bw)

ValueError Traceback (most recent call last)
<ipython-input> in <module>
10 v = georgia_data['Y']
11 g_coords=list(zip(u,v))
---> 12 bw=Sel_BW(g_coords, g_y, g_X, kernel='gaussian').search(criterion='AIC')
13 print(bw)

F:\anaconda\lib\site-packages\mgwr\sel_bw.py in search(self, search_method, criterion, bw_min, bw_max, interval, tol, max_iter, init_multi, tol_multi, rss_score, max_iter_multi, multi_bw_min, multi_bw_max, bws_same_times, pool, verbose)
317 -1] #scalar, optimal bw from initial gwr model
318 else:
--> 319 self._bw()
320 self.sel_hist = self.bw[-1]
321

F:\anaconda\lib\site-packages\mgwr\sel_bw.py in _bw(self)
335 self.constant)
336 delta = 0.38197 #1 - (np.sqrt(5.0)-1.0)/2.0
--> 337 self.bw = golden_section(a, c, delta, gwr_func, self.tol,
338 self.max_iter, self.int_score,
339 self.verbose)

F:\anaconda\lib\site-packages\mgwr\search.py in golden_section(a, c, delta, function, tol, max_iter, int_score, verbose)
60 score_b = dict[b]
61 else:
---> 62 score_b = function(b)
63 dict[b] = score_b
64 if verbose:

F:\anaconda\lib\site-packages\mgwr\sel_bw.py in <lambda>(bw)
324
325 def _bw(self):
--> 326 gwr_func = lambda bw: getDiag[self.criterion](GWR(
327 self.coords, self.y, self.X_loc, bw, family=self.family, kernel=
328 self.kernel, fixed=self.fixed, constant=self.constant, offset=self.

F:\anaconda\lib\site-packages\mgwr\gwr.py in fit(self, ini_params, tol, max_iter, solve, lite, pool)
333 rslt = map(self._local_fit, range(m)) #sequential
334
--> 335 rslt_list = list(zip(*rslt))
336 influ = np.array(rslt_list[0]).reshape(-1, 1)
337 resid = np.array(rslt_list[1]).reshape(-1, 1)

F:\anaconda\lib\site-packages\mgwr\gwr.py in _local_fit(self, i)
246 Local fitting at location i.
247 """
--> 248 wi = self._build_wi(i, self.bw).reshape(-1, 1) #local spatial weights
249
250 if isinstance(self.family, Gaussian):

F:\anaconda\lib\site-packages\mgwr\gwr.py in _build_wi(self, i, bw)
234
235 try:
--> 236 wi = Kernel(i, self.coords, bw, fixed=self.fixed,
237 function=self.kernel, points=self.points,
238 spherical=self.spherical).kernel

F:\anaconda\lib\site-packages\mgwr\kernels.py in __init__(self, i, data, bw, fixed, function, eps, ids, points, spherical)
54 self.bandwidth = float(bw)
55 else:
---> 56 self.bandwidth = np.partition(
57 self.dvec,
58 int(bw) - 1)[int(bw) - 1] * eps #partial sort in O(n) Time

<__array_function__ internals> in partition(*args, **kwargs)

F:\anaconda\lib\site-packages\numpy\core\fromnumeric.py in partition(a, kth, axis, kind, order)
746 else:
747 a = asanyarray(a).copy(order="K")
--> 748 a.partition(kth, axis=axis, kind=kind, order=order)
749 return a
750

ValueError: kth(=62) out of bounds (30)

Standard errors and t-vals

Need to add attributes/methods to MGWRResults class for the standard errors and t-vals, as well as those used to compute them, like the hat matrix, EDOF, and sigma2.

Spatial Variation Monte Carlo

Hi!

Is it possible to make the spatial variation Monte Carlo more efficient by using multiprocessing? I've been working on it myself, but I'm clearly doing something wrong since each set of std's on the Georgia data comes back exactly the same. Since it is important to be able to replicate runs using a seed, I tried taking the randomization of the coordinates out and creating a giant list of coordinates to feed to multiple processors and then collect the results. With the 8 cores on my PC, even 1,000 iterations should be done in a reasonable amount of time. Unfortunately, it just doesn't seem to be doing what I hoped. Any suggestions?

Thanks!
Allan

LinAlgError with Sel_BW and fixed=True

Getting a warning and error with a bandwidth selection search using the below code. There is no error when using adaptive bandwidth.

bw = Sel_BW(coords_mercator, y, X, fixed=True).search()

C:\Users\user\AppData\Local\Continuum\anaconda3\lib\site-packages\spglm\iwls.py:37: LinAlgWarning: Ill-conditioned matrix (rcond=9.61489e-19): result may not be accurate.
xtx_inv_xt = linalg.solve(xtx, xT)


LinAlgError Traceback (most recent call last)
<ipython-input> in <module>
1 # optimal bandwith selection search
2 # default golden section search using AICc criterion
----> 3 bw = Sel_BW(coords_mercator, y, X, fixed=True).search()
4 bw

~\AppData\Local\Continuum\anaconda3\lib\site-packages\mgwr\sel_bw.py in search(self, search_method, criterion, bw_min, bw_max, interval, tol, max_iter, init_multi, tol_multi, rss_score, max_iter_multi, multi_bw_min, multi_bw_max, bws_same_times, pool, verbose)
316 -1] #scalar, optimal bw from initial gwr model
317 else:
--> 318 self._bw()
319
320 self.pool = None

~\AppData\Local\Continuum\anaconda3\lib\site-packages\mgwr\sel_bw.py in _bw(self)
335 self.bw = golden_section(a, c, delta, gwr_func, self.tol,
336 self.max_iter, self.int_score,
--> 337 self.verbose)
338 elif self.search_method == 'interval':
339 self.bw = equal_interval(self.bw_min, self.bw_max, self.interval,

~\AppData\Local\Continuum\anaconda3\lib\site-packages\mgwr\search.py in golden_section(a, c, delta, function, tol, max_iter, int_score, verbose)
60 score_b = dict[b]
61 else:
---> 62 score_b = function(b)
63 dict[b] = score_b
64 if verbose:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\mgwr\sel_bw.py in <lambda>(bw)
325 self.coords, self.y, self.X_loc, bw, family=self.family, kernel=
326 self.kernel, fixed=self.fixed, constant=self.constant, offset=self.
--> 327 offset, spherical=self.spherical).fit(lite=True, pool=self.pool))
328
329 self._optimized_function = gwr_func

~\AppData\Local\Continuum\anaconda3\lib\site-packages\mgwr\gwr.py in fit(self, ini_params, tol, max_iter, solve, lite, pool)
333 rslt = map(self._local_fit, range(m)) #sequential
334
--> 335 rslt_list = list(zip(*rslt))
336 influ = np.array(rslt_list[0]).reshape(-1, 1)
337 resid = np.array(rslt_list[1]).reshape(-1, 1)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\mgwr\gwr.py in _local_fit(self, i)
249
250 if isinstance(self.family, Gaussian):
--> 251 betas, inv_xtx_xt = _compute_betas_gwr(self.y, self.X, wi)
252 predy = np.dot(self.X[i], betas)[0]
253 resid = self.y[i] - predy

~\AppData\Local\Continuum\anaconda3\lib\site-packages\spglm\iwls.py in _compute_betas_gwr(y, x, wi)
35 xT = (x * wi).T
36 xtx = np.dot(xT, x)
---> 37 xtx_inv_xt = linalg.solve(xtx, xT)
38 betas = np.dot(xtx_inv_xt, y)
39 return betas, xtx_inv_xt

~\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\linalg\basic.py in solve(a, b, sym_pos, lower, overwrite_a, overwrite_b, debug, check_finite, assume_a, transposed)
214 (a1, b1))
215 lu, ipvt, info = getrf(a1, overwrite_a=overwrite_a)
--> 216 _solve_check(n, info)
217 x, info = getrs(lu, ipvt, b1,
218 trans=trans, overwrite_b=overwrite_b)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\linalg\basic.py in _solve_check(n, info, lamch, rcond)
29 '.'.format(-info))
30 elif 0 < info:
---> 31 raise LinAlgError('Matrix is singular.')
32
33 if lamch is None:

LinAlgError: Matrix is singular.

Multicollinearity in MGWR

Right now we have several popular diagnostics for local multicollinearity in gwr: local correlation coefficients, local VIF, local condition number, and local variance decomposition proportions. The local CN in gwr is simply computed on the W-transformed design matrix, as is the local VDP, so I think we can get the mgwr equivalent easily by just computing them on the W-transformed matrix that uses a unique W-transform for each column of the design. I'm less certain about how to extend the local CC and VIF to the mgwr case. Fotheringham et al. have the local CC for gwr as

[equation image from the original post, not reproduced here]
so the denominator is quite easy to have separate weights for each of the covariates. But the numerator I am not sure about. The only thing I can think of is:

[equation image from the original post, not reproduced here]

but I have no idea if that would be a valid covariance in the numerator or if there is another way to specify a weighted covariance with different weights for each covariate.

And I know the VIF for OLS can typically be computed from a matrix of the CC's but I don't think it would carry the same interpretation for GAM's.

rework pickles in the tests

if we can, we need to avoid using pickles in the tests if this is going to be distributed by pysal.

Serialize the parameters we need.

AttributeError: 'tuple' object has no attribute 'shape'

When I was running the example code of the project, I met the following error:
[screenshot: error output, not reproduced here]

---> 2 mgwr_bw = mgwr_selector.search()
AttributeError: 'tuple' object has no attribute 'shape'

I only ran cells [1]-[3] and then [5]-[6]. My Python version is 3.8 on Ubuntu. The mgwr version is 2.1.1.

Please note that to read the data I unzipped the zip file and used the following code:
prenz = gp.read_file(ps.examples.get_path('prenzlauer.shp'))

index error in gwr prediction

Vinayaraj Poliyapam writes:

Thanks a lot for the pysal mgwr work!

I was using GWmodel in R earlier. I tried to use your module in Python. I can fit the model, but I face a problem while predicting when I use more samples than I used for training.

I get the following error.

IndexError: index 3699 is out of bounds for axis 0 with size 3699

Any comments on this greatly appreciated.

Output summary

It would be good to have a summary function that constructed a text statement with a summary of the GWR/MGWR results that could be printed or saved to a file, such as is done in GWR4/GWmodel.
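A hedged note: recent mgwr releases appear to expose a GWR4-style text report via a summary() method on fitted results, and saving to a file can then be done by redirecting stdout. Assuming coords, y, X, and bw from a working GWR setup:

import contextlib

results = GWR(coords, y, X, bw).fit()
results.summary()  # prints a GWR4-style diagnostic report

with open('gwr_summary.txt', 'w') as f, contextlib.redirect_stdout(f):
    results.summary()  # same report, saved to file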

hat matrices

Wei's suggestion and the back and forth we had back in March-ish of 2017 was:

We know that a simple definition of a hat matrix is \hat{y} = S y for hat matrix S.

If \hat{y} = \sum_j^p \hat{f}_j, then maybe we can get S from expanding the estimators of \hat{f}_j, given that each is \hat{f}_j = S_j( y - \sum_{k \neq j}^p \hat{f}_k) for process-specific hat matrix S_j.

In one line:

[screenshot of the one-line expression, not reproduced here]

Immediate question I have is: what's y^{-1}, given it's a vector?

Strategies I've looked into include:
[screenshot, not reproduced] which is just 1/y diagonalized;
[screenshot, not reproduced] inspired by the adjoint-determinant definition of the inverse; and
[screenshot, not reproduced] where that cross-times is an elementwise product, which is about as literal an interpretation of the factor-out logic as I can see.

None of this yields a hat matrix. In most cases, the second term is larger than the first term at nearly all elements, so you end up with a hat matrix with values somewhere between -4 and 0. Then, taking the dot of that and y gives you massively too-large numbers. BUT their general pattern looks sort of like the predicted values.

I'll post code here I'm using to generate these values, as well as track further ruminations.

MGWR has no localR2

MGWR has no localR2; the error is: NotImplementedError('Not yet implemented for multiple bandwidths')

Geopandas error after the installation of mgwr

I have installed the mgwr package through Anaconda. After the installation, geopandas gives problems that it was not giving before the installation of mgwr. This happens every time I try to install it.

      5 import pandas as pd
      6 import seaborn as sns
----> 7 import geopandas as gpd
      8 import mgwr
      9 from mgwr.gwr import GWR

/anaconda3/lib/python3.6/site-packages/geopandas/__init__.py in <module>
      2 from geopandas.geodataframe import GeoDataFrame
      3 
----> 4 from geopandas.io.file import read_file
      5 from geopandas.io.sql import read_postgis
      6 from geopandas.tools import sjoin

/anaconda3/lib/python3.6/site-packages/geopandas/io/file.py in <module>
      1 import os
      2 
----> 3 import fiona
      4 import numpy as np
      5 import six

/anaconda3/lib/python3.6/site-packages/fiona/__init__.py in <module>
     81     os.environ["PATH"] = os.environ["PATH"] + ";" + libdir
     82 
---> 83 from fiona.collection import BytesCollection, Collection
     84 from fiona.drvsupport import supported_drivers
     85 from fiona.env import ensure_env_with_credentials, Env

/anaconda3/lib/python3.6/site-packages/fiona/collection.py in <module>
      7 
      8 from fiona import compat, vfs
----> 9 from fiona.ogrext import Iterator, ItemsIterator, KeysIterator
     10 from fiona.ogrext import Session, WritingSession
     11 from fiona.ogrext import buffer_to_virtual_file, remove_virtual_file, GEOMETRY_TYPES

ImportError: dlopen(/anaconda3/lib/python3.6/site-packages/fiona/ogrext.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libfontconfig.1.dylib
  Referenced from: /anaconda3/lib/libpoppler.78.dylib
  Reason: Incompatible library version: libpoppler.78.dylib requires version 14.0.0 or later, but libfontconfig.1.dylib provides version 13.0.0

Redundant calculation of Aj

https://github.com/pysal/gwr_private/blob/8665c8ce2459c04ba5cd9d923ac1064ba9d81729/gwr/search.py#L220

for j in range(n):
    Wj = np.diag(optim_model.W[j])
    XtW = np.dot(temp_X.T, Wj)
    XtWX_inv = np.linalg.inv(np.dot(XtW, temp_X))
    P = np.dot(XtWX_inv, XtW)
    Aj[j,:] = temp_X[j,:] * P

This loop basically calculates the hat matrix of optim_model = gwr_func(temp_y, temp_X, bw) row by row; we could just let Aj = optim_model.S to avoid the for loop and get a big speed-up.
A test using the GA data, np.allclose(Aj, optim_model.S), yields all True.

@TaylorOshan

change imports from spreg

We're having some trouble with a few imports here.

If it's possible, can you switch from import spreg.user_output as USER to from spreg import user_output as USER? For some reason we're not sure of, our build chokes when we have imports of the general pattern:
import package.module_name as module_alias
instead of
from package import module_name as module_alias

We need to change this in the submodule contract, but it'll be present in future releases.

MGWR function not accepting the bw value or array

Creating the mgwr_model using the MGWR function requires the bandwidth parameter. However, it does not accept a value or an array as shown in the paper. The error says "AttributeError: 'numpy.ndarray' object has no attribute 'bw'" (screenshot attached). This implies that it requires the selector object as opposed to the bandwidth value or array. I passed the bandwidth selector object in place of the value and it worked! (screenshot attached)

[screenshot: mgwr_bw_error, not reproduced here]

[screenshot: mgwr_bw_resolved, not reproduced here]

MGWR does not allow for restricted-global variates.

Let's have a conventional setup of a GWR with some local and some global covariates, with X_loc being the local and X_glob being the global covariates.

the MGWR class requires you to pass bws, XB, and err from the Sel_BW(coords, Y, X).search() call. But it doesn't admit that XB is actually X_loc.dot(beta_estimate).

This means that it has no idea which columns of X correspond to those used in X_loc. If you pass a full X matrix into the MGWR class, the .fit() method will just iterate over the passed bandwidths, assume X is X_loc, and peel off the partials. If you have globals in X, then it'll only consider the first len(bw) columns.

I think I can get the behavior I want by inputting a BW vector to be very large for global covariates, but we need to accommodate this.

Why do we have a separate user-facing Sel_BW class? In what instance would a user want to select a bw but not use it to fit an MGWR, especially when fitting the bws requires us to have the beta estimates?

'ValueError: kth out of bounds' in MGWR fitting

Hi!
I am trying to fit an MGWR model based on the data (post-processed .shp file in attachment).
point_trans.zip

My code is as follow:

# imports added for completeness
import geopandas as gp
from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW

soil = gp.read_file('point_trans.shp')
s_y = soil['TNPC'].values.reshape((-1,1))
s_x = soil[['SOCgkg', 'ClayPC', 'SiltPC', 'NO3Ngkg', 'NH4Ngkg']].values
s_u = soil['X']
s_v = soil['Y']
s_coords = list(zip(s_u, s_v))
s_x = (s_x - s_x.mean(axis = 0)) / s_x.std(axis = 0)
s_y = (s_y - s_y.mean(axis = 0)) / s_y.std(axis = 0)
mgwr_selector_soil = Sel_BW(s_coords, s_y, s_x, multi = True, fixed = True)
mgwr_bw_soil = mgwr_selector_soil.search()
print(mgwr_bw_soil)
mgwr_results_soil = MGWR(s_coords, s_y, s_x, mgwr_selector_soil).fit()

First, I notice that the bandwidths of 'ClayPC' and 'NH4Ngkg' (both 7484.08m) exceed the maximum distance (3742.06m) between observation points. These two bandwidths were both 3741.7m in the existing study (see Analyst A in Comber et al. 2020). I don't know if this bias is caused by a mistake of mine. It seems to happen when the bandwidth tends to be global.

Besides, I get the following error message when fitting the model:

ValueError                                Traceback (most recent call last)
<ipython-input-44-7199e7274476> in <module>
----> 1 mgwr_results_soil = MGWR(s_coords, s_y, s_x, mgwr_selector_soil).fit()

C:\Anaconda\envs\basemap\lib\site-packages\mgwr\gwr.py in fit(self, n_chunks, pool)
   1593                        tqdm(range(self.n_chunks), desc='Inference'))
   1594 
-> 1595         rslt_list = list(zip(*rslt))
   1596         ENP_j = np.sum(np.array(rslt_list[0]), axis=0)
   1597         CCT = np.sum(np.array(rslt_list[1]), axis=0)

C:\Anaconda\envs\basemap\lib\site-packages\mgwr\gwr.py in _chunk_compute_R(self, chunk_id)
   1520 
   1521         for i in range(n):
-> 1522             wi = self._build_wi(i, self.bw_init).reshape(-1, 1)
   1523             xT = (self.X * wi).T
   1524             P = np.linalg.solve(xT.dot(self.X), xT).dot(init_pR).T

C:\Anaconda\envs\basemap\lib\site-packages\mgwr\gwr.py in _build_wi(self, i, bw)
    236             wi = Kernel(i, self.coords, bw, fixed=self.fixed,
    237                         function=self.kernel, points=self.points,
--> 238                         spherical=self.spherical).kernel
    239         except BaseException:
    240             raise  # TypeError('Unsupported kernel function  ', kernel)

C:\Anaconda\envs\basemap\lib\site-packages\mgwr\kernels.py in __init__(self, i, data, bw, fixed, function, eps, ids, points, spherical)
     56             self.bandwidth = np.partition(
     57                 self.dvec,
---> 58                 int(bw) - 1)[int(bw) - 1] * eps  #partial sort in O(n) Time
     59 
     60         self.kernel = self._kernel_funcs(self.dvec / self.bandwidth)

<__array_function__ internals> in partition(*args, **kwargs)

C:\Anaconda\envs\basemap\lib\site-packages\numpy\core\fromnumeric.py in partition(a, kth, axis, kind, order)
    744     else:
    745         a = asanyarray(a).copy(order="K")
--> 746     a.partition(kth, axis=axis, kind=kind, order=order)
    747     return a
    748 

ValueError: kth(=972) out of bounds (689)

There are 689 observation points in the data, but what does 'kth(=972)' mean? How can I deal with this problem?

Any comments on this greatly appreciated.

how to print the detail results of gwr

Hello guys, could anybody tell me how to print the detailed results of GWR? Jupyter just shows me mgwr.gwr.GWRResults object at 0x000002043860BEB0. Doesn't the mgwr package output a file such as a CSV?

I am totally a rookie... I will be grateful if someone could clear up my confusion.

LinAlgError with Sel_BW

Hello guys,
I'm using MGWR and Sel_BW; it throws an error when using a sample size > 2040 but works fine for < 2040.

# imports added for completeness; gdf is loaded from the attached train_paris.csv
import numpy as np
import pandas as pd
from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW

gdf = pd.read_csv('train_paris.csv')

b_y = np.log(gdf["price"].head(2050).values.reshape((-1, 1)))
b_X = gdf["surface"].head(2050).values.reshape((-1, 1))
u = gdf['x'].head(2050)
v = gdf['y'].head(2050)
b_coords = list(zip(u, v))
b_X = (b_X - b_X.mean(axis = 0)) / b_X.std(axis = 0)
b_y = (b_y - b_y.mean(axis = 0)) / b_y.std(axis = 0)
mgwr_selector = Sel_BW(b_coords, b_y, b_X, multi=True)
print(mgwr_selector.fixed)
print(mgwr_selector.kernel)
mgwr_bw = mgwr_selector.search()
print(mgwr_bw)
mgwr_results = MGWR(b_coords, b_y, b_X, mgwr_selector).fit()
/usr/local/lib/python3.7/dist-packages/mgwr/kernels.py:60: RuntimeWarning: divide by zero encountered in true_divide
  self.kernel = self._kernel_funcs(self.dvec / self.bandwidth)
/usr/local/lib/python3.7/dist-packages/mgwr/kernels.py:60: RuntimeWarning: invalid value encountered in true_divide
  self.kernel = self._kernel_funcs(self.dvec / self.bandwidth)
---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
<ipython-input-208-6365a372acef> in <module>()
----> 1 mgwr_bw = mgwr_selector.search(verbose=2)
      2 print(mgwr_bw)
      3 mgwr_results = MGWR(b_coords, b_y, b_X, mgwr_selector).fit()

12 frames
/usr/local/lib/python3.7/dist-packages/mgwr/sel_bw.py in search(self, search_method, criterion, bw_min, bw_max, interval, tol, max_iter, init_multi, tol_multi, rss_score, max_iter_multi, multi_bw_min, multi_bw_max, bws_same_times, pool, verbose)
    311 
    312         if self.multi:
--> 313             self._mbw()
    314             self.params = self.bw[3]  #params n by k
    315             self.sel_hist = self.bw[-2] #bw searching history

/usr/local/lib/python3.7/dist-packages/mgwr/sel_bw.py in _mbw(self)
    400                            self.max_iter_multi, self.rss_score, gwr_func,
    401                            bw_func, sel_func, multi_bw_min, multi_bw_max,
--> 402                            bws_same_times, verbose=self.verbose)
    403 
    404     def _init_section(self, X_glob, X_loc, coords, constant):

/usr/local/lib/python3.7/dist-packages/mgwr/search.py in multi_bw(init, y, X, n, k, family, tol, max_iter, rss_score, gwr_func, bw_func, sel_func, multi_bw_min, multi_bw_max, bws_same_times, verbose)
    221                 bw = bws[j]
    222             else:
--> 223                 bw = sel_func(bw_class, multi_bw_min[j], multi_bw_max[j])
    224                 gwr_sel_hist.append(deepcopy(bw_class.sel_hist))
    225 

/usr/local/lib/python3.7/dist-packages/mgwr/sel_bw.py in sel_func(bw_func, bw_min, bw_max)
    395                 search_method=search_method, criterion=criterion,
    396                 bw_min=bw_min, bw_max=bw_max, interval=interval, tol=tol,
--> 397                 max_iter=max_iter, pool=self.pool, verbose=False)
    398 
    399         self.bw = multi_bw(self.init_multi, y, X, n, k, family, self.tol_multi,

/usr/local/lib/python3.7/dist-packages/mgwr/sel_bw.py in search(self, search_method, criterion, bw_min, bw_max, interval, tol, max_iter, init_multi, tol_multi, rss_score, max_iter_multi, multi_bw_min, multi_bw_max, bws_same_times, pool, verbose)
    317                 -1]  #scalar, optimal bw from initial gwr model
    318         else:
--> 319             self._bw()
    320             self.sel_hist = self.bw[-1]
    321 

/usr/local/lib/python3.7/dist-packages/mgwr/sel_bw.py in _bw(self)
    337             self.bw = golden_section(a, c, delta, gwr_func, self.tol,
    338                                      self.max_iter, self.int_score,
--> 339                                      self.verbose)
    340         elif self.search_method == 'interval':
    341             self.bw = equal_interval(self.bw_min, self.bw_max, self.interval,

/usr/local/lib/python3.7/dist-packages/mgwr/search.py in golden_section(a, c, delta, function, tol, max_iter, int_score, verbose)
     60             score_b = dict[b]
     61         else:
---> 62             score_b = function(b)
     63             dict[b] = score_b
     64             if verbose:

/usr/local/lib/python3.7/dist-packages/mgwr/sel_bw.py in <lambda>(bw)
    327             self.coords, self.y, self.X_loc, bw, family=self.family, kernel=
    328             self.kernel, fixed=self.fixed, constant=self.constant, offset=self.
--> 329             offset, spherical=self.spherical).fit(lite=True, pool=self.pool))
    330 
    331         self._optimized_function = gwr_func

/usr/local/lib/python3.7/dist-packages/mgwr/gwr.py in fit(self, ini_params, tol, max_iter, solve, lite, pool)
    333                 rslt = map(self._local_fit, range(m))  #sequential
    334 
--> 335             rslt_list = list(zip(*rslt))
    336             influ = np.array(rslt_list[0]).reshape(-1, 1)
    337             resid = np.array(rslt_list[1]).reshape(-1, 1)

/usr/local/lib/python3.7/dist-packages/mgwr/gwr.py in _local_fit(self, i)
    249 
    250         if isinstance(self.family, Gaussian):
--> 251             betas, inv_xtx_xt = _compute_betas_gwr(self.y, self.X, wi)
    252             predy = np.dot(self.X[i], betas)[0]
    253             resid = self.y[i] - predy

/usr/local/lib/python3.7/dist-packages/spglm/iwls.py in _compute_betas_gwr(y, x, wi)
     35     xT = (x * wi).T
     36     xtx = np.dot(xT, x)
---> 37     xtx_inv_xt = linalg.solve(xtx, xT)
     38     betas = np.dot(xtx_inv_xt, y)
     39     return betas, xtx_inv_xt

/usr/local/lib/python3.7/dist-packages/scipy/linalg/basic.py in solve(a, b, sym_pos, lower, overwrite_a, overwrite_b, debug, check_finite, assume_a, transposed)
    214                                                (a1, b1))
    215         lu, ipvt, info = getrf(a1, overwrite_a=overwrite_a)
--> 216         _solve_check(n, info)
    217         x, info = getrs(lu, ipvt, b1,
    218                         trans=trans, overwrite_b=overwrite_b)

/usr/local/lib/python3.7/dist-packages/scipy/linalg/basic.py in _solve_check(n, info, lamch, rcond)
     29                          '.'.format(-info))
     30     elif 0 < info:
---> 31         raise LinAlgError('Matrix is singular.')
     32 
     33     if lamch is None:

LinAlgError: Matrix is singular.

train_paris.csv
I tried changing the kernel but nothing worked. I'm using only one feature, the surface, to predict the price.
I'm attaching the file so you can reproduce the error.
Thanks for your help.

enh: support patsy model formulas

similar to what I've just raised over at spreg, it would be a really nice addition to allow model specifications via patsy formulas. In this case, it would kill two birds with one stone, since I notice the predict method hasn't yet been implemented, and including a patsy API would go a long way towards addressing #47

I can get started working on this if folks agree, but also like spreg I'd be interested in (1) whether folks want to include this addition and (2) what a good API strategy would look like

rebuild rights access?

@sjsrey @ljwolf @TaylorOshan I am doing some housekeeping for mgwr, and a build has errored out, which led me to realize that I don't have permissions to restart a build on Travis or mark issues/PRs for the repo. Is there an easy switch that can be flipped to grant me those permissions?

adding API example for loading data from Geopandas Geodataframe

Is it possible to add some examples of how to use the package if users load data from a GeoPandas GeoDataFrame? The example provided here has multiple reshapes using numpy and is a bit hard to follow.

And it seems to me that the coords can't take the geometry column of a GeoDataFrame, and both X and y must be numpy arrays instead of pandas Series. With all due respect, this is not very user-friendly, and I wonder if this could be improved in a future release.
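In the meantime, a hedged conversion sketch (the file path and column names are hypothetical) shows how little glue code is needed to go from a GeoDataFrame with point geometries to the arrays mgwr expects:

import geopandas as gpd
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

gdf = gpd.read_file('points.shp')                    # hypothetical input file
coords = list(zip(gdf.geometry.x, gdf.geometry.y))   # point geometries -> coordinate pairs
y = gdf['response'].values.reshape((-1, 1))          # hypothetical column names
X = gdf[['var1', 'var2']].values

bw = Sel_BW(coords, y, X).search()
results = GWR(coords, y, X, bw).fit()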

Large-scale data brings the server down

When the data set has about 3000 polygons/points with more than 20 attributes, the program brings the server down. My server was shut down several times because of this. I was running the code on a Linux server with Python 3.6, mgwr 2.0.0, more than 72 CPU cores, and 300GB+ of memory.

progress bar in GWRResults spatial_variability

Considering spatial_variability is very computationally demanding, it'd be nice to add a progress bar (tqdm) to the for-loop here:

mgwr/mgwr/gwr.py

Lines 1217 to 1225 in 5e7fa3f

for x in range(n_iters):
    temp_coords = np.random.permutation(self.model.coords)
    temp_sel.coords = temp_coords
    temp_bw = temp_sel.search(**search_params)
    temp_gwr.bw = temp_bw
    temp_gwr.coords = temp_coords
    temp_params = temp_gwr.fit(**fit_params).params
    temp_sd = np.std(temp_params, axis=0)
    SDs.append(temp_sd)

tqdm is already used here

mgwr/mgwr/search.py

Lines 202 to 209 in 5e7fa3f

try:
    from tqdm.auto import tqdm  #if they have it, let users have a progress bar
except ImportError:
    def tqdm(x, desc=''):  #otherwise, just passthrough the range
        return x

for iters in tqdm(range(1, max_iter + 1), desc='Backfitting'):

so I guess the same code should work:

    try:
        from tqdm.auto import tqdm  #if they have it, let users have a progress bar
    except ImportError:

        def tqdm(x, desc=""):  #otherwise, just passthrough the range
            return x

    for x in tqdm(range(n_iters), desc="Testing"):  # Is "Testing" the right description? ¯\_(ツ)_/¯ 
        # ...

Let me know what you think. I can do the PR (If this is the case, I'd need ideas for the description text: desc="Testing" ?)

(bug, doc) `family` parameter for Poisson GWR/MGWR

The docstring for the family parameter in the Sel_BW class is problematic. The current implementation of the class treats this parameter as an instance, while the docstring claims it to be a string. This is confusing to users (#58). We should either change the docstring to match the implementation or change the implementation to match the docstring.
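A brief sketch of the distinction, assuming the implementation stays as-is (family must be an instantiated spglm family object) and coords, y, X come from a working setup:

from spglm.family import Poisson
from mgwr.sel_bw import Sel_BW

sel = Sel_BW(coords, y, X, family=Poisson())  # instance: works
# Sel_BW(coords, y, X, family=Poisson)        # class object: fails downstream
# Sel_BW(coords, y, X, family='poisson')      # string, as the docstring implies: also fails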

GWR poisson: local variable 'aicc' referenced before assignment

I used the following code to do the bandwidth selection for GWR Poisson but got an error saying "local variable 'aicc' referenced before assignment":

import libpysal as ps
from mgwr.sel_bw import Sel_BW
import numpy as np
from spglm.family import Poisson
data = ps.io.open(ps.examples.get_path('GData_utm.csv'))
coords = list(zip(data.by_col('X'), data.by_col('Y')))
y = np.array(data.by_col('PctBach')).reshape((-1,1))
rural = np.array(data.by_col('PctRural')).reshape((-1,1))
pov = np.array(data.by_col('PctPov')).reshape((-1,1))
african_amer = np.array(data.by_col('PctBlack')).reshape((-1,1))
X = np.hstack([rural, pov, african_amer])
bw = Sel_BW(coords, y, X, kernel='gaussian', family = Poisson).search(criterion='AICc')

Distance matrix calculation is not vectorized for lat, lon (spherical) coordinates

I'm doing a GWR model on 13,000 observations with longitudes and latitudes. Instantiating the model takes over 15 minutes on a MacBook Pro 2016. I dug a bit and narrowed it down to the distance calculation.

This section of the code could be drastically faster if it were vectorized instead of using a vanilla Python loop:

mgwr/mgwr/kernels.py

Lines 82 to 84 in 3bdfdf2

for i in range(n):
    for j in range(m):
        dmat[i, j] = _haversine(coords1[i][0], coords1[i][1], coords2[j][0], coords2[j][1])

I can put together a PR if you want.
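A hedged sketch of what a vectorized replacement could look like — a standalone NumPy implementation under the assumptions above, not mgwr's actual code:

import numpy as np

def haversine_matrix(coords1, coords2, r=6371.0):
    # coords arrays are shape (n, 2) / (m, 2) with columns (lon, lat) in degrees
    lon1, lat1 = np.radians(np.asarray(coords1)).T
    lon2, lat2 = np.radians(np.asarray(coords2)).T
    dlon = lon2[None, :] - lon1[:, None]  # pairwise longitude differences, shape (n, m)
    dlat = lat2[None, :] - lat1[:, None]  # pairwise latitude differences, shape (n, m)
    a = (np.sin(dlat / 2) ** 2
         + np.cos(lat1)[:, None] * np.cos(lat2)[None, :] * np.sin(dlon / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))  # great-circle distances in km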

Document parameter order

When including a constant in estimation (via the constant=True argument to the GWR object constructor), it is not clear whether the constant is the first or the last column in GWRResults.params, GWRResults.tvalues, etc. (or, I suppose, one of the ones in the middle, but that seems less likely 😉).
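Until the docs spell this out, one quick empirical check is possible — a sketch assuming coords, y, X, and bw from a working GWR setup: prepend a constant column manually and compare against the automatic constant.

import numpy as np

ones = np.ones((X.shape[0], 1))
auto = GWR(coords, y, X, bw, constant=True).fit()
manual = GWR(coords, y, np.hstack([ones, X]), bw, constant=False).fit()
# True here would indicate the constant occupies the first column of params
print(np.allclose(auto.params, manual.params))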
